Archive for the 'Ronald Dworkin' Category

Neo- and Other Liberalisms

Everybody seems to be worked up about “neoliberalism” these days. A review of Quinn Slobodian’s new book on the Austrian (or perhaps the Austro-Hungarian) roots of neoliberalism in the New Republic by Patrick Iber reminded me that the term “neoliberalism,” which, in my own faulty recollection, came into somewhat popular usage only in the early 1980s, had actually been coined in the late 1930s at the now almost legendary Colloque Walter Lippmann and had actually been used by Hayek in at least one of his political essays in the 1940s. In that usage the point of neoliberalism was to revise and update the classical nineteenth-century liberalism that seemed to have run aground in the Great Depression, when the attempt to resurrect and restore what had been widely – and in my view mistakenly – regarded as an essential pillar of the nineteenth-century liberal order – the international gold standard – collapsed in an epic international catastrophe. The new liberalism was supposed to be a kinder and gentler — less relentlessly laissez-faire – version of the old liberalism, more amenable to interventions to aid the less well-off and to social-insurance programs providing a safety net to cushion individuals against the economic risks of modern capitalism, while preserving the social benefits and efficiencies of a market economy based on private property and voluntary exchange.

Any memory of Hayek’s use of “neo-liberalism” was blotted out by the subsequent use of the term to describe the unorthodox efforts of two young ambitious Democratic politicians, Bill Bradley and Dick Gephardt, to promote tax reform. Bradley, who was then a first-term Senator from New Jersey, having graduated directly from NBA stardom to the US Senate in 1978, and Gephardt, then an obscure young Congressman from Missouri, made a splash in the first term of the Reagan administration by proposing to cut income tax rates well below the rates that Reagan had proposed when running for President in 1980 and subsequently enacted early in his first term. Bradley and Gephardt proposed cutting the top federal income tax bracket from the new 50% rate to the then almost unfathomable 30%. What made the Bradley-Gephardt proposal liberal was the idea that special-interest tax exemptions would be eliminated, so that the reduced rates would not mean a loss of tax revenue, while making the tax system less intrusive on private decision-making, improving economic efficiency. Despite cutting the top rate, Bradley and Gephardt retained the principle of progressivity by reducing the entire rate structure from top to bottom while eliminating tax deductions and tax shelters.

Here is how David Ignatius described Bradley’s role in achieving the 1986 tax reform in the Washington Post (May 18, 1986):

Bradley’s intellectual breakthrough on tax reform was to combine the traditional liberal approach — closing loopholes that benefit mainly the rich — with the supply-side conservatives’ demand for lower marginal tax rates. The result was Bradley’s 1982 “Fair Tax” plan, which proposed removing many tax preferences and simplifying the tax code with just three rates: 14 percent, 26 percent and 30 percent. Most subsequent reform plans, including the measure that passed the Senate Finance Committee this month, were modelled on Bradley’s.

The Fair Tax was an example of what Democrats have been looking for — mostly without success — for much of the last decade. It synthesized liberal and conservative ideas in a new package that could appeal to middle-class Americans. As Bradley noted in an interview this week, the proposal offered “lower rates for the middle-income people who are the backbone of America, who are paying most of the freight.” And who, it might be added, increasingly have been voting Republican in recent presidential elections.

The Bradley proposal also offered Democrats a way to shed their anti-growth, tax-and-spend image by allowing them, as Bradley says, “to advocate economic growth and fairness simultaneously.” The only problem with the idea was that it challenged the party’s penchant for soak-the-rich rhetoric and interest-group politics.

So the new liberalism of Bradley and Gephardt was an ideological movement in the opposite direction from that of the earlier version of neoliberalism; the point of neoliberalism 1.0 was to moderate classical laissez-faire liberal orthodoxy; neoliberalism 2.0 aimed to counter the knee-jerk interventionism of New Deal liberalism that favored highly progressive income taxation to redistribute income from rich to poor and price ceilings and controls to protect the poor from exploitation by ruthless capitalists and greedy landlords and as an anti-inflation policy. The impetus for reassessing mid-twentieth-century American liberalism was the evident failure in the 1970s of wage and price controls, which had been supported with little evidence of embarrassment by most Democratic economists (with the notable exception of James Tobin) when imposed by Nixon in 1971, and by the decade-long rotting residue of Nixon’s controls — controls on crude oil and gasoline prices — finally scrapped by Reagan in 1981.

Although neoliberalism 2.0 enjoyed considerable short-term success, eventually providing the template for the 1986 Reagan tax reform, and establishing Bradley and Gephardt as major figures in the Democratic Party, neoliberalism 2.0 was never embraced by the Democratic grassroots. Gephardt himself abandoned the neo-liberal banner in 1988 when he ran for President as a protectionist, pro-Labor Democrat, providing the eventual nominee, the mildly neoliberalish Michael Dukakis, with plenty of material with which to portray Gephardt as a flip-flopper. But Dukakis’s own failure in the general election did little to enhance the prospects of neoliberalism as a winning electoral strategy. The Democratic acceptance of low marginal tax rates in exchange for eliminating tax breaks, exemptions and shelters was short-lived, and Bradley himself abandoned the approach in 2000 when he ran for the Democratic Presidential nomination from the left against Al Gore.

So the notion that “neoliberalism” has any definite meaning is as misguided as the notion that “liberalism” has any definite meaning. “Neoliberalism” now serves primarily as a term of abuse for leftists to impugn the motives of their ideological and political opponents in exactly the same way that right-wingers use “liberal” as a term of abuse — one of many, of course — with which to dismiss and denigrate their ideological and political opponents. That archetypical classical liberal Ludwig von Mises was openly contemptuous of the neoliberalism that emerged from the Colloque Walter Lippmann and of its later offspring Ordoliberalism (frequently described as the Germanic version of neoliberalism), referring to it as “neo-interventionism.” Similarly, modern liberals who view themselves as upholders of New Deal liberalism deploy “neoliberalism” as a useful pejorative epithet with which to cast a rhetorical cloud over those sharing a not so dissimilar political background or outlook but who are more willing to tolerate the outcomes of market forces than they are.

There are many liberalisms and perhaps almost as many neoliberalisms, so it’s pointless and futile to argue about which is the true or legitimate meaning of “liberalism.” However, one can at least say about the two versions of neoliberalism that I’ve mentioned that they were attempts to moderate more extreme versions of liberalism and to move toward the ideological middle of the road: from the extreme laissez-faire of classical liberalism on the right and from the dirigisme of the New Deal on the left toward – pardon the cliché – a third way in the center.

But despite my disclaimer that there is no fixed, essential, meaning of “liberalism,” I want to suggest that it is possible to find some common thread that unites many, if not all, of the disparate strands of liberalism. I think it’s important to do so, because it wasn’t so long ago that even conservatives were able to speak approvingly about the “liberal democratic” international order that was created, largely thanks to American leadership, in the post-World War II era. That time is now unfortunately past, but it’s still worth remembering that it once was possible to agree that “liberal” did correspond to an admirable political ideal.

The deep underlying principle that I think reconciles the different strands of the best versions of liberalism is a version of Kant’s categorical imperative: treat every individual as an end, not a means. Individuals must not be used merely as tools or instruments with which other individuals or groups satisfy their own purposes. If you want someone else to serve you in accomplishing your ends, that other person must provide that assistance to you voluntarily, not because you require him to do so. If you want that assistance you must secure it not by command but by persuasion. Persuasion can be secured in two ways, either by argument — persuading the other person to share your objective — or, if you can’t, or won’t, persuade the person to share your objective, you can still secure his or her agreement to help you by offering some form of compensation to induce the person to provide you the services you desire.

The principle has an obvious libertarian interpretation: all cooperation is secured through voluntary agreements between autonomous agents. Force and fraud are impermissible. But the Kantian ideal doesn’t necessarily imply a strictly libertarian political system. The choices of autonomous agents can — actually must — be restricted by a set of legal rules governing the conduct of those agents. And the content of those legal rules must be worked out either by legislation or by an evolutionary process of common law adjudication or some combination of the two. The content of those rules needn’t satisfy a libertarian laissez-faire standard. Rather the liberal standard that legal rules must satisfy is that they don’t prescribe or impose ends, goals, or purposes that must be pursued by autonomous agents, but simply govern the means agents can employ in pursuing their objectives.

Legal rules of conduct are like semantic rules of grammar. Like rules of grammar that don’t dictate the ideas or thoughts expressed in speech or writing, only the manner of their expression, rules of conduct don’t specify the objectives that agents seek to achieve, only the acceptable means of accomplishing those objectives. The rules of conduct need not be libertarian; some choices may be ruled out for reasons of ethics or morality or expediency or the common good. What makes the rules liberal is that they apply equally to all citizens, and that the rules allow sufficient space to agents to conduct their own lives according to their own purposes, goals, preferences, and values.

In other words, the rule of law — not the rule of particular groups, classes, occupations — prevails. Agents are subject to an impartial legal standard, not to the will or command of another agent, or of the ruler. And for this to be the case, the ruler himself must be subject to the law. But within this framework of law that imposes no common goals and purposes on agents, a good deal of collective action to provide for common purposes — far beyond the narrow boundaries of laissez-faire doctrine — is possible. Citizens can be taxed to pay for a wide range of public services that the public, through its elected representatives, decides to provide. Those elected representatives can enact legislation that governs the conduct of individuals as long as the legislation does not treat individuals differently based on irrelevant distinctions or based on criteria that disadvantage certain people unfairly.

My view that the rule of law, not laissez-faire, not income redistribution, is the fundamental value and foundation of liberalism is a view that I learned from Hayek, who, in his later life, was as much a legal philosopher as an economist, but it is a view that John Rawls and Ronald Dworkin on the left, and Michael Oakeshott on the right, also shared. Hayek, indeed, went so far as to say that he was fundamentally in accord with Rawls’s magnum opus A Theory of Justice, which was supposed to have provided a philosophical justification for modern welfare-state liberalism. Liberalism is a big tent, and it can accommodate a wide range of conflicting views on economic and even social policy. What sets liberalism apart is a respect for and commitment to the rule of law and due process, a commitment that ought to take precedence over any specific policy goal or preference.

But here’s the problem. If the ruler can also make or change the laws, the ruler is not really bound by the laws, because the ruler can change the law to permit any action that the ruler wants to take. How, then, is the rule of law consistent with a ruler who is empowered to make the law to which he is supposedly subject? That is the dilemma that every liberal state must cope with. And for Hayek, at least, the issue was especially problematic in connection with taxation.

With the possible exception of inflation, what concerned Hayek most about modern welfare-state policies was the highly progressive income-tax regimes that western countries had adopted in the mid-twentieth century. By almost any reasonable standard, top marginal income-tax rates were way too high in the mid-twentieth century, and the economic case for reducing the top rates was compelling when reducing the top rates would likely entail little, if any, net revenue loss. As a matter of optics, reductions in the top marginal rates had to be coupled with reductions in the lower tax brackets, which did entail revenue losses, but reforming an overly progressive tax system without a substantial revenue loss was not that hard to do.

But Hayek’s argument against highly progressive income tax rates was based more on principle than on expediency. Hayek regarded steeply progressive income tax rates as inherently discriminatory, imposing a disproportionate burden on a minority — the wealthy — of the population. Hayek did not oppose modest progressivity to ease the tax burden on the least well-off, viewing such progressivity as a legitimate concession that a well-off majority could allow to a less-well-off minority. But he greatly feared attempts by the majority to shift the burden of taxation onto a well-off minority, viewing that kind of progressivity as a kind of legalized hold-up, whereby the majority uses its control of the legislature to write the rules to its own advantage at the expense of the minority.

While Hayek’s concern that a wealthy minority could be plundered by a greedy majority seems plausible, a concern bolstered by the unreasonably high top marginal rates that were in place when he wrote, he overstated his case in arguing that high marginal rates were, in and of themselves, unequal treatment. Certainly it would be discriminatory if different tax rates applied to people because of their religion or national origin or for reasons unrelated to income, but even a highly progressive income tax is not discriminatory on its face, as Hayek alleged it to be, when the progressivity is embedded in a schedule of rates applicable to everyone who reaches the specified income thresholds.

There are other reasons to think that Hayek went too far in his opposition to progressive tax rates. First, he assumed that earned income accurately measures the value of the incremental contribution to social output. But Hayek overlooked that much of earned income reflects rents that are not necessary to call forth the efforts required to earn that income, in which case increasing the marginal tax rate on such earnings does not diminish effort and output. We also know, as a result of a classic 1971 paper by Jack Hirshleifer, that earned incomes often do not correspond to net social output. For example, incomes earned by stock and commodity traders reflect only in part incremental contributions to social output; they also reflect losses incurred by other traders. So resources devoted to acquiring information with which to make better predictions of future prices add less to output than those resources are worth, implying a net reduction in total output. Insofar as earned incomes reflect not incremental contributions to social output but income transfers from other individuals, raising taxes on those incomes can actually increase aggregate output.
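A stylized illustration of Hirshleifer’s point (the numbers are invented purely for exposition): suppose a trader spends 10 units of real resources acquiring price information slightly ahead of everyone else, enabling him to capture 100 units of trading profit that would otherwise have accrued to his counterparties. His private return is 100 - 10 = 90, but no additional output has been produced; the only change in aggregate output is the 10 units of resources consumed in acquiring the information, so the social return is -10.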

So the economic case for reducing marginal tax rates is not necessarily more compelling than the philosophical case, and the economic arguments certainly seem less compelling than they did some three decades ago when Bill Bradley, in his youthful neoliberal enthusiasm, argued eloquently for drastically reducing marginal rates while broadening the tax base. Supporters of reducing marginal tax rates still like to point to the dynamic benefits of increasing incentives to work and invest, but they don’t acknowledge that earned income does not necessarily correspond closely to net contributions to aggregate output.

Drastically reducing the top marginal rate from 70% to 28% within five years greatly increased the incentive to earn high incomes. The taxation of high incomes having been reduced so drastically, the number of people earning very high incomes since 1986 has grown very rapidly. Does that increase in the number of people earning very high incomes reflect an improvement in the overall economy, or does it reflect a shift in the occupational choices of talented people? Since the increase in very high incomes has not been associated with an increase in the overall rate of economic growth, it hardly seems obvious that the increase in the number of people earning very high incomes is closely correlated with the overall performance of the economy. I suspect rather that the opportunity to earn and retain very high incomes has attracted many very talented people into occupations, like financial management, venture capital, investment banking, and real-estate brokerage, in which high incomes are being earned, with correspondingly fewer people choosing to enter less lucrative occupations. And if, as I suggested above, these occupations in which high incomes are being earned often contribute less to total output than lower-paying occupations, the increased opportunity to earn high incomes has actually reduced overall economic productivity.

Perhaps the greatest effect of reducing marginal income tax rates has been sociological. I conjecture that, as a consequence of reduced marginal income tax rates, the social status and prestige of people earning high incomes has risen, as has the social acceptability of conspicuous — even brazen — public displays of wealth. The presumption that those who have earned high incomes and amassed great fortunes are morally deserving of those fortunes, and therefore entitled to deference and respect on account of their wealth alone, a presumption that Hayek himself warned against, seems to be much more widely held now than it was forty or fifty years ago. Others may take a different view, but I find this shift towards increased respect and admiration for the wealthy, curiously combined with a supposedly populist political environment, to be decidedly unedifying.

Yes, Judges Do Make Law

Scott Sumner has just written an interesting comment to my previous post in which I criticized a remark made by Judge Gorsuch upon being nominated to fill the vacant seat on the Supreme Court — so interesting, in fact, that I think it is worth responding to him in a separate post.

First, here is the remark made by Judge Gorsuch to which I took exception.

I respect, too, the fact that in our legal order, it is for Congress and not the courts to write new laws. It is the role of judges to apply, not alter, the work of the people’s representatives. A judge who likes every outcome he reaches is very likely a bad judge . . . stretching for results he prefers rather than those the law demands.

I criticized Judge Gorsuch for denying what to me is the obvious fact that judges do make law. They make law, because the incremental effect of each individual decision results in a legal order that is different from the legislation that has been enacted by legislatures. Each decision creates a precedent that must be considered by other judges as they apply and construe the sum total of legislatively enacted statutes in light of, and informed by, the precedents of judges and the legal principles that have guided judges in reaching those precedents. Law-making by judges under a common law system — even a common law system in which judges are bound to acknowledge the authority of statutory law — is inevitable for many reasons, one but not the only reason being that statutes will sooner or later have to be applied in circumstances that were not foreseen by the legislators who enacted those statutes.

To take an example of Constitutional law off the top of my head: is it an unreasonable search for the police to search the cell phone of someone they have arrested without first getting a search warrant? That’s what the Supreme Court had to decide two years ago in Riley v. California. The answer to that question could not be determined by reading the text of the Fourth Amendment, which talks about the people being secure in their “persons, houses, papers, and effects,” or by doing a historical analysis of what the original understanding of the terms “search” and “seizure” and “papers and effects” was when the Fourth Amendment to the Constitution was enacted. Earlier courts had to decide whether government eavesdropping on phone calls violated the Fourth Amendment. And other courts have had to decide whether collecting metadata about phone calls is a violation. Answers to those legal questions can’t be found by reading the relevant legal text.

Here’s part of the New York Times story about the Supreme Court’s decision in Riley v. California.

In a sweeping victory for privacy rights in the digital age, the Supreme Court on Wednesday unanimously ruled that the police need warrants to search the cellphones of people they arrest.

While the decision will offer protection to the 12 million people arrested every year, many for minor crimes, its impact will most likely be much broader. The ruling almost certainly also applies to searches of tablet and laptop computers, and its reasoning may apply to searches of homes and businesses and of information held by third parties like phone companies.

“This is a bold opinion,” said Orin S. Kerr, a law professor at George Washington University. “It is the first computer-search case, and it says we are in a new digital age. You can’t apply the old rules anymore.”

But he added that old principles required that their contents be protected from routine searches. One of the driving forces behind the American Revolution, Chief Justice Roberts wrote, was revulsion against “general warrants,” which “allowed British officers to rummage through homes in an unrestrained search for evidence of criminal activity.”

“The fact that technology now allows an individual to carry such information in his hand,” the chief justice also wrote, “does not make the information any less worthy of the protection for which the founders fought.”

Now for Scott’s comment:

I don’t see how Gorsuch’s view conflicts with your view. It seems like Gorsuch is saying something like “Judges should not legislate, they should interpret the laws.” And you are saying “the laws are complicated.” Both can be true!

Well, in a sense, maybe, because what judges do is technically not legislation. But they do make law; their opinions determine for the rest of us what we may legally do and what we may not legally do, which rights we can expect to be respected and which we cannot. Judges can even change the plain meaning of a statute in order to uphold a more basic, if unwritten, principle of justice, which, under the plain meaning of Judge Gorsuch’s remark (“It is the role of judges to apply, not alter, the work of the people’s representatives”), would have to be regarded as an abuse of judicial discretion. The absurdity of what I take to be Gorsuch’s position is beautifully illustrated by the case of Riggs v. Palmer, which the late — and truly great — Ronald Dworkin discussed in his magnificent article “Is Law a System of Rules?” aka “The Model of Rules.” Here is the one paragraph in which Dworkin uses the Riggs case to show that judges apply not just specific legal rules (e.g., statutory rules), but also deeper principles that govern how those rules should be applied.

My immediate purpose, however, is to distinguish principles in the generic sense from rules, and I shall start by collecting some examples of the former. The examples I offer are chosen haphazardly; almost any case in a law school casebook would provide examples that would serve as well. In 1889, a New York court, in the famous case of Riggs v. Palmer, had to decide whether an heir named in the will of his grandfather could inherit under that will, even though he had murdered his grandfather to do so. The court began its reasoning with this admission: “It is quite true that statutes regulating the making, proof and effect of wills, and the devolution of property, if literally construed [my emphasis], and if their force and effect can in no way and under no circumstances be controlled or modified, give this property to the murderer.” But the court continued to note that “all laws as well as all contracts may be controlled in their operation and effect by general, fundamental maxims of the common law. No one shall be permitted to profit by his own fraud, or to take advantage of his own wrong, or to found any claim upon his own iniquity, or to acquire property by his own crime.” The murderer did not receive his inheritance.

QED. In this case the common law overruled the statute, and justice prevailed over injustice. Game, set, match to the judge!

Justice Scalia and the Original Meaning of Originalism


(I almost regret writing this post because it took a lot longer to write than I expected and I am afraid that I have ventured too deeply into unfamiliar territory. But having expended so much time and effort on this post, I must admit to being curious about what people will think of it.)

I resist the temptation to comment on Justice Scalia’s character beyond one observation: a steady stream of irate outbursts may have secured his status as a right-wing icon and burnished his reputation as a minor literary stylist, but his eruptions brought no credit to him or to the honorable Court on which he served.

But I will comment at greater length on the judicial philosophy, originalism, which he espoused so tirelessly. The first point to make, in discussing originalism, is that there are at least two concepts of originalism that have been advanced. The first and older concept is that the provisions of the US Constitution should be understood and interpreted as the framers of the Constitution intended those provisions to be understood and interpreted. The task of the judge, in interpreting the Constitution, would then be to reconstruct the collective or shared state of mind of the framers and, having ascertained that state of mind, to interpret the provisions of the Constitution in accord with that collective or shared state of mind.

A favorite originalist example is the “cruel and unusual punishment” provision of the Eighth Amendment to the Constitution. Originalists dismiss all arguments that capital punishment is cruel and unusual, because the authors of the Eighth Amendment could not have believed capital punishment to be cruel and unusual. If that is what they had believed, why, having passed the Eighth Amendment, did the first Congress proceed to impose the death penalty for treason, counterfeiting and other offenses in 1790? So it seems obvious that the authors of the Eighth Amendment did not intend to ban capital punishment. If so, originalists argue, the “cruel and unusual” provision of the Eighth Amendment can provide no ground for ruling that capital punishment violates the Eighth Amendment.

There are a lot of problems with the original-intent version of originalism, the most obvious being the impossibility of attributing an unambiguous intention to the 39 delegates to the Constitutional Convention who signed the final document. The Constitutional text that emerged from the Convention was a compromise among many competing views and interests, and it did not necessarily conform to the intentions of any of the delegates, much less all of them. True, James Madison was the acknowledged author of the Bill of Rights, so if we are parsing the Eighth Amendment, we might, in theory, focus exclusively on what he understood the Eighth Amendment to mean. But focusing on Madison alone would be problematic, because Madison actually opposed adding a Bill of Rights to the original Constitution; Madison introduced the Bill of Rights as amendments to the Constitution in the first Congress only because the Constitution would not have been approved without an understanding that the Bill of Rights he had opposed would be adopted as amendments to the Constitution. The inherent ambiguity in the notion of intention, even in the case of a single individual acting out of mixed, if not conflicting, motives – an ambiguity compounded when action is undertaken collectively by individuals – causes the notion of original intent to dissolve into nothingness when one tries to apply it in practice.

Realizing that trying to determine the original intent of the authors of the Constitution (including the Amendments thereto) is a fool’s errand, many originalists, including Justice Scalia, tried to salvage the doctrine by shifting its focus from the inscrutable intent of the Framers to the objective meaning that a reasonable person would have attached to the provisions of the Constitution when it was ratified. Because the provisions of the Constitution are either ordinary words or legal terms, the meaning that would reasonably have been attached to those provisions can supposedly be ascertained by consulting the contemporary sources, either dictionaries or legal treatises, in which those words or terms were defined. It is this original meaning that, according to Scalia, must remain forever inviolable, because to change the meaning of provisions of the Constitution would allow unelected judges to covertly amend the Constitution, evading the amendment process spelled out in Article V of the Constitution, thereby nullifying the principle of a written constitution that constrains the authority and powers of all branches of government. Instead of being limited by the Constitution, judges not bound by the original meaning arrogate to themselves an unchecked power to impose their own values on the rest of the country.

To return to the Eighth Amendment, Scalia would say that the meaning attached to the term “cruel and unusual” when the Eighth Amendment was passed was clearly not so broad that it prohibited capital punishment. Otherwise, how could Congress, having voted to adopt the Eighth Amendment, proceed to make counterfeiting and treason and several other federal offenses capital crimes? Of course that’s a weak argument, because Congress, like any other representative assembly, is under no obligation or constraint to act consistently. It’s well known that democratic decision-making need not be consistent, and just because a general principle is accepted doesn’t mean that the principle will not be violated in specific cases. A written Constitution is supposed to impose some discipline on democratic decision-making for just that reason. But there was no mechanism in place to prevent such inconsistency, judicial review of Congressional enactments not having become part of the Constitutional fabric until John Marshall’s 1803 opinion in Marbury v. Madison made judicial review, quite contrary to the intention of many of the Framers, an organic part of the American system of governance.

Indeed, in 1798, less than ten years after the Bill of Rights was adopted, Congress enacted the Alien and Sedition Acts, which, I am sure even Justice Scalia would have acknowledged, violated the First Amendment prohibition against abridging the freedom of speech and the press. To be sure, the Congress that passed the Alien and Sedition Acts was not the same Congress that passed the Bill of Rights, but one would hardly think that the original meaning of abridging freedom of speech and the press had been forgotten in the intervening decade. Nevertheless, to uphold his version of originalism, Justice Scalia would have to either argue that the original meaning of the First Amendment had been forgotten or acknowledge that one can’t simply infer from the actions of a contemporaneous or nearly contemporaneous Congress what the original meaning of the provisions of the Constitution was, because it is clearly possible that the actions of Congress could have been contrary to some supposed original meaning of the provisions of the Constitution.

Be that as it may, for purposes of the following discussion, I will stipulate that we can ascertain an objective meaning that a reasonable person would have attached to the provisions of the Constitution at the time it was ratified. What I want to examine is Scalia’s idea that it is an abuse of judicial discretion for a judge to assign a meaning to any Constitutional term or provision that is different from that original meaning. To show what is wrong with Scalia’s doctrine, I must first explain that Scalia’s doctrine is based on the legal philosophy known as legal positivism. Whether Scalia realized that he was a legal positivist I don’t know, but it’s clear that Scalia was taking the view that the validity and legitimacy of a law or a legal provision or a legal decision (including a Constitutional provision or decision) derives from an authority empowered to make law, and that no one other than an authorized law-maker or sovereign is empowered to make law.

According to legal positivism, all law, including Constitutional law, is understood as an exercise of will – a command. What distinguishes a legal command from, say, a mugger’s command to a victim to turn over his wallet is that the mugger is not a sovereign. Not only does the sovereign get what he wants, the sovereign, by definition, gets it legally; we are not only forced — compelled — to obey, but, to add insult to injury, we are legally obligated to obey. And morality has nothing to do with law or legal obligation. That’s the philosophical basis of legal positivism to which Scalia, wittingly or unwittingly, subscribed.

Luckily for us, we Americans live in a country in which the people are sovereign, but the power of the people to exercise their will collectively was delimited and circumscribed by the Constitution ratified in 1788. Under positivist doctrine, the sovereign people in creating the government of the United States of America laid down a system of rules whereby the valid and authoritative expressions of the will of the people would be given the force of law and would be carried out accordingly. The rule by which the legally valid, authoritative, command of the sovereign can be distinguished from the command of a mere thug or bully is what the legal philosopher H. L. A. Hart called a rule of recognition. In the originalist view, the rule of recognition requires that any judicial judgment accord with the presumed original understanding of the provisions of the Constitution when the Constitution was ratified, thereby becoming the authoritative expression of the sovereign will of the people, unless that original understanding has subsequently been altered by way of the amendment process spelled out in Article V of the Constitution. What Scalia and other originalists are saying is that any interpretation of a provision of the Constitution that conflicts with the original meaning of that provision violates the rule of recognition and is therefore illegitimate. Hence, Scalia’s simmering anger at decisions of the court that he regarded as illegitimate departures from the original meaning of the Constitution.

But legal positivism is not the only theory of law. F. A. Hayek, who, despite his good manners, somehow became a conservative and libertarian icon a generation before Scalia, subjected legal positivism to withering criticism in volume one of Law, Legislation and Liberty. But the classic critique of legal positivism was written a little over a half century ago by Ronald Dworkin, in his essay “Is Law a System of Rules?” (aka “The Model of Rules”). Dworkin’s main argument was that no system of rules can be sufficiently explicit and detailed to cover all possible fact patterns that would have to be adjudicated by a judge. Legal positivists view the exercise of discretion by judges as an exercise of personal will authorized by the Sovereign in cases in which no legal rule exactly fits the facts of a case. Dworkin argued that rather than an imposition of judicial will authorized by the sovereign, the exercise of judicial discretion is an application of the deeper principles relevant to the case, thereby allowing the judge to determine which, among the many possible rules that could be applied to the facts of the case, best fits with the totality of the circumstances, including the prior judicial decisions that the judge must take into account. According to Dworkin, law and the legal system as a whole is not an expression of sovereign will, but a continuing articulation of principles in terms of which specific rules of law must be understood, interpreted, and applied.

The meaning of a legal or Constitutional provision can’t be fixed at a single moment, because, like all social institutions, meaning evolves and develops organically. Not being an expression of the sovereign will, the meaning of a legal term or provision cannot be identified by a putative rule of recognition – e.g., the original-meaning doctrine — that freezes the meaning of the term at a particular moment in time. It is not true, as Scalia and originalists argue, that conceding that the meaning of Constitutional terms and provisions can change and evolve allows unelected judges to substitute their will for the sovereign will enshrined when the Constitution was ratified. When a judge acknowledges that the meaning of a term has changed, the judge does so because that new meaning has already been foreshadowed in earlier cases with which his decision in the case at hand must comport. There is always a danger that the reasoning of a judge is faulty, but faulty reasoning can beset judges claiming to apply the original meaning of a term, as Chief Justice Taney did in his infamous Dred Scott opinion, in which Taney argued that the original meaning of the term “property” included property in human beings.

Here is an example of how a change in meaning may be required by a change in our understanding of a concept. It may not be the best example to shed light on the legal issues, but it is the one that occurs to me as I write this. About a hundred years ago, Bertrand Russell and Alfred North Whitehead were writing one of the great philosophical works of the twentieth century, Principia Mathematica. Their objective was to prove that all of mathematics could be reduced to pure logic. It was a grand and heroic effort that they undertook, and their work will remain a milestone in the history of philosophy. If Russell and Whitehead had succeeded in their effort of reducing mathematics to logic, it could properly be said that mathematics is really the same as logic, and the meaning of the word “mathematics” would be no different from the meaning of the word “logic.” But if the meaning of mathematics were indeed the same as that of logic, it would not be the result of Russell and Whitehead having willed “mathematics” and “logic” to mean the same thing, Russell and Whitehead being possessed of no sovereign power to determine the meaning of “mathematics.” Whether mathematics is really the same as logic depends on whether all of mathematics can be logically deduced from a set of axioms. No matter how much Russell and Whitehead wanted mathematics to be reducible to logic, the factual question of whether mathematics can be reduced to logic has an answer, and the answer is completely independent of what Russell and Whitehead wanted it to be.

Unfortunately for Russell and Whitehead, the Viennese mathematician Kurt Gödel came along nearly two decades after they completed the third and final volume of their masterpiece and proved an “incompleteness theorem” showing that mathematics could not be reduced to logic – mathematics is therefore not the same as logic – because in any consistent axiomatized system rich enough to express arithmetic, some true propositions of arithmetic will be logically unprovable. The meaning of mathematics is therefore demonstrably not the same as the meaning of logic. This difference in meaning had to be discovered; it could not be willed.

Actually, it was Humpty Dumpty who famously anticipated the originalist theory that meaning is conferred by an act of will.

“I don’t know what you mean by ‘glory,’ ” Alice said.
Humpty Dumpty smiled contemptuously. “Of course you don’t—till I tell you. I meant ‘there’s a nice knock-down argument for you!’ ”
“But ‘glory’ doesn’t mean ‘a nice knock-down argument’,” Alice objected.
“When I use a word,” Humpty Dumpty said, in rather a scornful tone, “it means just what I choose it to mean—neither more nor less.”
“The question is,” said Alice, “whether you can make words mean so many different things.”
“The question is,” said Humpty Dumpty, “which is to be master—that’s all.”

In Humpty Dumpty’s doctrine, meaning is determined by a sovereign master. In originalist doctrine, the sovereign master is the presumed will of the people when the Constitution and the subsequent Amendments were ratified.

So the question whether capital punishment is “cruel and unusual” can’t be answered, as Scalia insisted, simply by invoking a rule of recognition that freezes the meaning of “cruel and unusual” at the presumed meaning it had in 1790, because the point of a rule of recognition is to identify the sovereign will that is given the force of law, while the meaning of “cruel and unusual” does not depend on anyone’s will. If a judge reaches a decision based on a meaning of “cruel and unusual” different from the supposed original meaning, the judge is not abusing his discretion; the judge is engaged in judicial reasoning. The reasoning may be good or bad, right or wrong, but judicial reasoning is not rendered illegitimate just because it assigns a meaning to a term different from the supposed original meaning. The test of judicial reasoning is how well it accords with the totality of judicial opinions and relevant principles from which the judge can draw in supporting his reasoning. Invoking a supposed original meaning of what “cruel and unusual” meant to Americans in 1789 does not tell us how to understand the meaning of “cruel and unusual,” just as the question whether logic and mathematics are synonymous cannot be answered by insisting that Russell and Whitehead were right in thinking that mathematics and logic are the same thing. (I note for the record that I personally have no opinion about whether capital punishment violates the Eighth Amendment.)

One reason meanings change is that circumstances change. The meaning of freedom of the press and freedom of speech may have been perfectly clear in 1789, but our conception of what is protected by the First Amendment has certainly expanded since the First Amendment was ratified. As new media for conveying speech have been introduced, the courts have brought those media under the protection of the First Amendment. Scalia made a big deal of joining with the majority in Texas v. Johnson, a 1989 case in which the conviction of a flag burner was overturned. Scalia liked to cite that case as proof of his fidelity to the text of the Constitution; while pouring scorn on the flag burner, Scalia announced that despite his righteous desire to exact a terrible retribution from the bearded weirdo who burned the flag, he had no choice but to follow – heroically, in his estimation — the text of the Constitution.

But flag-burning is certainly a form of symbolic expression, and it is far from obvious that the original meaning of the First Amendment included symbolic expression. To be sure some forms of symbolic speech were recognized as speech in the eighteenth century, but it could be argued that the original meaning of freedom of speech and the press in the First Amendment was understood narrowly. The compelling reason for affording flag-burning First Amendment protection is not that flag-burning was covered by the original meaning of the First Amendment, but that a line of cases has gradually expanded the notion of what activities are included under what the First Amendment calls “speech.” That is the normal process by which law changes and meanings change, incremental adjustments taking into account unforeseen circumstances, eventually leading judges to expand the meanings ascribed to old terms, because the expanded meanings comport better with an accumulation of precedents and the relevant principles on which judges have relied in earlier cases.

But perhaps the best example of how changes in meaning emerge organically from our efforts to cope with changing and unforeseen circumstances, rather than being the willful impositions of a higher authority, is provided by originalism itself, because “originalism” was originally about the original intention of the Framers of the Constitution. It was only when it became widely accepted that the original intention of the Framers was not something that could be ascertained that people like Antonin Scalia decided to change the meaning of “originalism,” so that it was no longer about the original intention of the Framers, but about the original meaning of the Constitution when it was ratified. So what we have here is a perfect example of how the meaning of a well-understood term came to be changed, because the original meaning of the term was found to be problematic. And who was responsible for this change in meaning? Why, the very same people who insist that it is forbidden to tamper with the original meaning of the terms and provisions of the Constitution. But they had no problem in changing the meaning of their doctrine of Constitutional interpretation. Do I blame them for changing the meaning of the originalist doctrine? Not one bit. But if originalists were only marginally more introspective than they seem to be, they might have realized that changes in meaning are perfectly normal and legitimate, especially when trying to give concrete meaning to abstract terms in a way that best fits in with the entire tradition of judicial interpretation embodied in the totality of all previous judicial decisions. That is the true task of a judge, not a pointless quest for original meaning.

Cluelessness about Strategy, Tactics and Discretion

In his op-ed in the weekend Wall Street Journal, John Taylor restates his confused opposition to what Ben Bernanke calls the policy of constrained discretion followed by the Federal Reserve during his tenure at the Fed, as a member of the Board of Governors under Alan Greenspan from 2002 to 2005 and as Chairman from 2006 to 2014. Taylor has been arguing for the Fed to adopt what he calls the “rules-based monetary policy” supposedly practiced by the Fed while Paul Volcker was chairman (at least from 1981 onwards) and for most of Alan Greenspan’s tenure until 2003, when, according to Taylor, the Fed abandoned the “rules-based monetary policy” that it had followed since 1981. In a recent post, I explained why Taylor’s description of Fed policy under Volcker was historically inaccurate and why his critique of recent Fed policy is both historically inaccurate and conceptually incoherent.

Taylor denies that his steady refrain calling for a “rules-based policy” (i.e., the implementation of some version of his beloved Taylor Rule) is intended “to chain the Fed to an algebraic formula;” he just thinks that the Fed needs “an explicit strategy for setting the instruments” of monetary policy. Now I agree that one ought not to set a policy goal without a strategy for achieving the goal, but Taylor is saying that he wants to go far beyond a strategy for achieving a policy goal; he wants a strategy for setting instruments of monetary policy, which seems like an obvious confusion between strategy and tactics, ends and means.

Instruments are the means by which a policy is implemented. Setting a policy goal can be considered a strategic decision; setting a policy instrument a tactical decision. But Taylor is saying that the Fed should have a strategy for setting the instruments with which it implements its strategic policy.  (OED, “instrument – 1. A thing used in or for performing an action: a means. . . . 5. A tool, an implement, esp. one used for delicate or scientific work.”) This is very confused.

Let’s be very specific. The Fed, for better or for worse – I think for worse — has made a strategic decision to set a 2% inflation target. Taylor does not say whether he supports the 2% target; his criticism is that the Fed is not setting the instrument – the Fed Funds rate – that it uses to hit the 2% target in accordance with the Taylor rule. He regards the failure to set the Fed Funds rate in accordance with the Taylor rule as a departure from a rules-based policy. But the Fed has continually undershot its 2% inflation target for the past three years. So the question naturally arises: if the Fed had raised the Fed Funds rate to the level prescribed by the Taylor rule, would the Fed have succeeded in hitting its inflation target? If Taylor thinks that a higher Fed Funds rate than has prevailed since 2012 would have led to higher inflation than we experienced, then there is something very wrong with the Taylor rule, because, under the Taylor rule, the Fed Funds rate is positively related to the difference between the actual inflation rate and the target rate. If a Fed Funds rate higher than the rate set for the past three years would have led, as the Taylor rule implies, to lower inflation than we experienced, following the Taylor rule would have meant disregarding the Fed’s own inflation target. How is that consistent with a rules-based policy?
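For readers who want the formula in front of them, the canonical 1993 version of the Taylor rule (stated here only to fix ideas, not as the exact specification Taylor now insists on) prescribes a nominal Fed Funds rate of

i = r* + π + 0.5(π - π*) + 0.5(y - y*),

where π is current inflation, π* is the 2% inflation target, r* is the assumed equilibrium real interest rate (2% in Taylor's original paper), and (y - y*) is the percentage output gap. The positive coefficient on (π - π*) is what makes the prescribed rate rise when inflation runs above target and fall when it runs below target, which is why a Taylor-rule prescription above the actual Fed Funds rate fits so awkwardly with three years of undershooting the inflation target.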

It is worth noting that the practice of defining a rule in terms of a policy instrument rather than in terms of a policy goal did not originate with John Taylor; it goes back to Milton Friedman, who somehow convinced a generation of monetary economists that the optimal policy for the Fed would be to target the rate of growth of the money supply at a k-percent annual rate. I have devoted other posts to explaining the absurdity of Friedman’s rule, but the point that I want to emphasize now is that Friedman, for complicated reasons which I think (but am not sure) that I understand, convinced himself that (classical) liberal principles require that governments and government agencies exercise their powers only in accordance with explicit and general rules that preclude or minimize the exercise of discretion by the relevant authorities.

Friedman’s confusions about his k-percent rule were deep and comprehensive, as a quick perusal of chapter 3 of Capitalism and Freedom, “The Control of Money,” amply demonstrates. In practice, the historical gold standard was a mixture of gold coin, privately issued banknotes and deposits, and government banknotes, a system that did not function particularly well and required frequent and significant government intervention. Unlike a pure gold currency, in which, given the high cost of extracting gold from the ground, the quantity of gold money would change only gradually, a mixed system of gold coin and banknotes and deposits was subject to large and destabilizing fluctuations in quantity. So, in Friedman’s estimation, the liberal solution was to design a monetary system such that the quantity of money would expand at a slow and steady rate, providing the best of all possible worlds: the stability of a pure gold standard and the minimal resource cost of a paper currency. In making this argument, as I have shown in an earlier post, Friedman displayed a basic misunderstanding of what constituted the gold standard as it was historically practiced, especially during its heyday from about 1880 to the outbreak of World War I, believing that the crucial characteristic of the gold standard was the limitation that it imposed on the quantity of money, when in fact the key characteristic of the gold standard is that it forces the value of money – regardless of its material content — to be equal to the value of a specified quantity of gold. (This misunderstanding – the focus on control of the quantity of money as the key task of monetary policy — led to Friedman’s policy instrumentalism – i.e., setting a policy rule in terms of the quantity of money.)

Because Friedman wanted to convince his friends in the Mont Pelerin Society (his egregious paper “Real and Pseudo Gold Standards” was originally presented at a meeting of the Mont Pelerin Society), who largely favored the gold standard, that (classical) liberal principles did not necessarily entail restoration of the gold standard, he emphasized a distinction between what he called the objectives of monetary policy and the instruments of monetary policy. In fact, in the classic discussion of the issue, an essay called “Rules versus Authorities in Monetary Policy,” Friedman’s teacher at Chicago, Henry Simons, also tried to formulate a rule that would be entirely automatic, operating insofar as possible in a mechanical fashion, even considering the option of stabilizing the quantity of money. But Simons correctly understood that any operational definition of money is necessarily arbitrary, meaning that there will always be a bright line between what is money under the definition and what is not money, even though the practical difference between what is on one side of the line and what is on the other will be slight. Thus, the existence of near-moneys would make control of any monetary aggregate a futile exercise. Simons therefore defined a monetary rule in terms of an objective of monetary policy: stabilizing the price level. Friedman did not want to settle for such a rule, because he understood that stabilizing the price level has its own ambiguities, there being many ways to measure the price level as well as theoretical problems in constructing index numbers (the composition and weights assigned to components of the index being subject to constant change) that make any price index inexact. Given Friedman’s objective — demonstrating that there is a preferable alternative to the gold standard evaluated in terms of (classical) liberal principles – a price-level rule lacked the automatism that Friedman felt was necessary to trump the gold standard as a monetary rule.

Friedman therefore made his case for a monetary rule in terms of the quantity of money, ignoring Simons’s powerful arguments against trying to control the quantity of money, stating the rule in general terms and treating the selection of an operational definition of money as a mere detail. Here is how Friedman put it:

If a rule is to be legislated, what rule should it be? The rule that has most frequently been suggested by people of a generally liberal persuasion is a price level rule; namely, a legislative directive to the monetary authorities that they maintain a stable price level. I think this is the wrong kind of a rule [my emphasis]. It is the wrong kind of a rule because it is in terms of objectives that the monetary authorities do not have the clear and direct power to achieve by their own actions. It consequently raises the problem of dispersing responsibilities and leaving the authorities too much leeway.

As an aside, I note that Friedman provided no explanation of why such a rule would disperse responsibilities. Who besides the monetary authority did Friedman think would have responsibility for controlling the price level under such a rule? Whether such a rule would give the monetary authorities “too much leeway” is of course an entirely different question.

There is unquestionably a close connection between monetary actions and the price level. But the connection is not so close, so invariable, or so direct that the objective of achieving a stable price level is an appropriate guide to the day-to-day activities of the authorities. (p. 53)

Friedman continues:

In the present state of our knowledge, it seems to me desirable to state the rule in terms of the behavior of the stock of money. My choice at the moment would be a legislated rule instructing the monetary authority to achieve a specified rate of growth in the stock of money. For this purpose, I would define the stock of money as including currency outside commercial banks plus all deposits of commercial banks. I would specify that the Reserve System shall see to it [Friedman’s being really specific there, isn’t he?] that the total stock of money so defined rises month by month, and indeed, so far as possible day by day, at an annual rate of X per cent, where X is some number between 3 and 5. (p. 54)
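As a purely arithmetical aside, here is a minimal sketch of the path such a rule would prescribe. The starting stock and the choice of X = 4 percent are my own illustrative assumptions; Friedman specifies only the 3-to-5 percent range.

```python
# Minimal sketch of the path implied by an X percent money-growth rule:
# the (currency plus commercial-bank deposit) stock is required to grow at a
# constant annual rate, compounded month by month. The starting stock and
# X = 4 percent are illustrative assumptions, not Friedman's numbers.

initial_stock = 1_000.0   # index the initial money stock to 1,000 (arbitrary units)
annual_rate = 0.04        # X = 4 percent, within Friedman's 3-to-5 percent range

monthly_factor = (1 + annual_rate) ** (1 / 12)   # constant month-to-month growth factor

path = [initial_stock * monthly_factor ** m for m in range(13)]  # one year, monthly
for month, stock in enumerate(path):
    print(f"month {month:2d}: {stock:8.2f}")   # ends the year at 1,040.00
```

The arithmetic is trivial, which is precisely the difficulty taken up next: the problem is not computing the prescribed path but the premise that the Fed can make a deposit-inclusive money stock follow it.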

Friedman, of course, deliberately ignored, or, more likely, simply did not understand, that the quantity of deposits created by the banking system, under whatever definition, is no more under the control of the Fed than the price level. So the whole premise of Friedman’s money-supply rule, namely that it was formulated in terms of an instrument under the immediate control of the monetary authority, rests on the fallacy that the quantity of money is something the monetary authority is able to control at will.

I therefore note, as a further aside, that in his latest Wall Street Journal op-ed, Taylor responded to Bernanke’s observation that the Taylor rule becomes inoperative when the rule implies an interest-rate target below zero. Taylor disagrees:

The zero bound is not a new problem. Policy rule design research took that into account decades ago. The default was to move to a stable money growth regime not to massive asset purchases.

Taylor may regard the stable money growth regime as an acceptable default rule when the Taylor rule is sidelined at the zero lower bound. But if so, he is caught in a trap of his own making, because, whether he admits it or not, the quantity of money, unlike the Fed Funds rate, is not an instrument under the direct control of the Fed. If Taylor rejects an inflation target as a monetary rule because it grants too much discretion to the monetary authority, then he must also reject a stable money growth rule, because it allows at least as much discretion as does an inflation target. Indeed, if the past 35 years have shown us anything, it is that the Fed has much more control over the price level and the rate of inflation than it has over the quantity of money, however defined.
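For readers who have not seen the formula behind this dispute, here is a minimal sketch of the original Taylor (1993) rule, which sets the federal funds rate from inflation and the output gap with weights of one-half and assumes a 2 percent natural real rate and a 2 percent inflation target. The slump inputs below are hypothetical, chosen only to show how the rule can prescribe a negative rate, which is where the zero-bound problem arises.

```python
# Minimal sketch of the original Taylor (1993) rule. In a deep enough slump
# the prescribed federal funds rate goes below zero, the point at which
# Bernanke says the rule stops giving usable guidance. The slump inputs
# below are hypothetical, not data.

def taylor_rule(inflation, output_gap, natural_rate=2.0, target_inflation=2.0):
    """Prescribed nominal funds rate (percent), Taylor (1993) coefficients."""
    return (inflation + natural_rate
            + 0.5 * (inflation - target_inflation)
            + 0.5 * output_gap)

print(taylor_rule(inflation=2.0, output_gap=0.0))    # normal times: 4.0 percent
print(taylor_rule(inflation=0.5, output_gap=-6.0))   # deep slump: -1.25 percent
```

The sketch only reproduces the arithmetic of the rule; it takes no position on whether falling back on a money-growth default at the zero bound is feasible, which is exactly what is in dispute above.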

This post is already too long, but I think it’s important to say something about discretion, which was such a bugaboo for Friedman and remains one for Taylor. The concept of discretion is not as simple as Friedman and Taylor make it out to be. If you pay attention to what the word means in ordinary usage, you will see that discretion does not necessarily, or even usually, refer to an unchecked authority to act as one pleases. Rather, it suggests that the authority to make a decision is being granted to a person or an official, but that the decision is to be made in light of criteria or principles that, while not fully explicit, still inform and constrain it.

The best analysis of what is meant by discretion that I know of is by Ronald Dworkin in his classic essay “Is Law a System of Rules?” Dworkin discusses the meaning of discretion in the context of a judge deciding a “hard case,” one in which conflicting rules of law seem to be applicable, or in which none of the relevant rules seems to fit the facts. Such a judge is said to exercise discretion, because his decision is not straightforwardly determined by the existing set of legal rules. The legal positivists against whom Dworkin was arguing would say that the judge is able, and called upon, to exercise his discretion in deciding the case, meaning that the judge is simply imposing his will. It is something like this positivist view that underlies Friedman’s intolerance for discretion.

Countering the positivist view, Dworkin considers the example of a sergeant ordered by his lieutenant to take his five most experienced soldiers on patrol, and reflects on how to interpret an observer’s statement about the orders: “the orders left the sergeant a great deal of discretion.” It is clear that, in carrying out his orders, the sergeant is called upon to exercise his judgment, because he is not given a metric for measuring the experience of his soldiers. But that does not mean that when he chooses five soldiers to go on patrol, he is engaging in an exercise of will. The decision can be carried out with good judgment or with bad judgment, but it is an exercise of judgment, not will. A judge deciding a hard case is likewise exercising judgment, on a more sophisticated level, to be sure, than the sergeant choosing soldiers; he is not just indulging his preferences.

If the Fed is committed to an inflation target, then, by choosing a setting for its instrumental target, the Fed Funds rate, the Fed is exercising judgment in light of its policy goals. That exercise of judgment in pursuit of a policy goal is very different from the arbitrary behavior of the Fed in the 1970s when its decisions were taken with no clear price-level or inflation target and with no clear responsibility for hitting the target.

Ben Bernanke has described the monetary regime in which the Fed’s decisions are governed by an explicit inflation target and a subordinate commitment to full employment as one of “constrained discretion.” When using this term, Taylor always encloses it in quotation marks, apparently to suggest that the term is an oxymoron. But that is yet another mistake; “constrained discretion” is no oxymoron. Indeed, it is a pleonasm, the exercise of discretion usually being understood to mean not an unconstrained exercise of will, but an exercise of judgment in the light of relevant goals, policies, and principles.

PS I apologize for not having responded to comments recently. I will try to catch up later this week.

Ronald Dworkin, RIP

I never met Ronald Dworkin, and I have not studied his work on legal philosophy carefully, but one essay of his made a deep impression on me when I read it over 40 years ago as an undergraduate, and I still consider it just about the most profound discussion of law that I have ever read. The essay, “Is Law a System of Rules?” (reprinted in The Philosophy of Law), is a refutation of the philosophy of legal positivism, which holds that law is simply the command of a duly authorized sovereign lawgiver, an idea powerfully articulated by Thomas Hobbes and later by Jeremy Bentham.

Legal positivism was developed largely in reaction to theories of natural law, reflected in the work of legal philosophers like Hugo Grotius and Samuel Pufendorf, and in William Blackstone’s famous Commentaries on the Laws of England. The validity of law and the obligation to obey law were derived from the correspondence, even if only imperfect, of positive law to natural law. Blackstone’s Commentaries were largely a form of apologetics aimed at showing how well English law corresponded to the natural law. Jeremy Bentham would have none of this, calling “natural rights” (i.e., the rights derived from natural law) simple nonsense, and “natural and imprescriptible rights” nonsense on stilts.

Legal positivism was first given a systematic exposition by Bentham’s younger contemporary, John Austin, who described law as those commands of a sovereign for which one would be punished if one failed to obey them, the sovereign being he who is habitually obeyed. The twentieth-century legal philosopher H. L. A. Hart further refined the doctrine in a definitive treatise, The Concept of Law, in which he argued that law must have a systematic and non-arbitrary structure: law is more than a set of sovereign commands; it is a system of rules. But those rules have no necessary connection to moral principles.

As a Rhodes Scholar, Dworkin studied under Hart at Oxford, but he rejected Hart’s view of law. In his paper “Is Law a System of Rules?” Dworkin subjected legal positivism, in the sophisticated version (law as a system of rules) articulated by Hart, to a searching philosophical analysis. When I read Dworkin’s essay, I had already read Hayek’s great work, The Constitution of Liberty, and, while Hayek was visiting UCLA in the 1968-69 academic year, the first draft of his Law, Legislation and Liberty. In both of these works, Hayek had also criticized legal positivism, which he viewed as diametrically opposed to his cherished ideal of the rule of law as a necessary condition of liberty. But his criticism seemed to me not nearly as effective or as interesting as Dworkin’s. Despite disagreeing with Dworkin on a lot of issues, I have, ever since, admired Dworkin as a pre-eminent legal and political philosopher.

Dworkin’s main criticism of the theory that law is a system of rules was that the theory cannot account for the role played by legal principles in informing and guiding judges in deciding actual cases whose outcome is not obvious. Here is how Dworkin, in his essay, described the role of one such principle.

In 1889 a New York court, in the famous case of Riggs v. Palmer, had to decide whether an heir named in the will of his grandfather could inherit under that will, even though he had murdered his grandfather to do so. The court began its reasoning with this admission: “It is quite true that statutes regulating the making, proof and effect of wills, and the devolution of property, if literally construed, and if their force and effect can in no way and under no circumstances be controlled or modified, give this property to the murderer.” But the court continued to note that “all laws as well as all contracts may be controlled in their operation and effect by general, fundamental maxims of the common law. No one shall be permitted to profit by his own fraud, or to take advantage of his own wrong, or to found any claim upon his own iniquity, or acquire property by his own crime.” The murderer did not receive his inheritance.

From here Dworkin went on to conduct a rigorous philosophical analysis of how the principle that no one may profit from his own wrong could be understood within the conceptual framework of legal positivism, according to which law is nothing more than a system of rules. Rules, Dworkin argued, cannot be applied in a vacuum; there must be principles and standards that give judges the resources to decide cases in which there is no exact match between the given facts and an applicable rule, that is, cases in which, in the terminology of legal positivism, judges must exercise discretion, as if discretion meant no more than the freedom to reach an arbitrary, unprincipled decision. Principles govern judicial decisions, but not in the same way that rules do. Rules are binary, on or off; principles are flexible: they have weight, and their application requires judgment.

If we take baseball rules as a model, we find that rules of law, like the rule that a will is invalid unless signed by three witnesses, fit the model well. If the requirement of three witnesses is a valid legal rule, then it cannot be that a will has been signed by only two witnesses and is valid. . . .

But this is not the way the sample principles in the quotations operated. Even those which look most like rules do not set out legal consequences that follow automatically when the conditions provided are met. We say that our law respects the principle that no man may profit from his own wrong, but we do not mean that the law never permits a man to profit from wrongs he commits. In fact, people most often profit, perfectly legally, from their legal wrongs. . . .

We do not treat these . . . counter-instances . . . as showing that the principle about profiting from one’s own wrongs is not a principle of our legal system, or that it is incomplete and needs qualifying exceptions. We do not treat counter-instances as exceptions (at least not exceptions in the way in which a catcher’s dropping the third strike is an exception) because we could not hope to capture these counter-instances simply by a more extended statement of the principle. . . . Listing some of these might sharpen our sense of the principle’s weight, but it would not make for a more accurate or complete statement of the principle. . . .

All that is meant, when we say that a particular principle is a principle of our law, is that the principle is one which officials must take into account, if it is relevant, as a consideration inclining in one direction or another.

Just as an aside, I will observe that this passage and others in Dworkin’s essay make it clear that when Chief Justice Roberts appeared before the Senate Judiciary Committee in 2005 and stated that in his view the job of a judge is calling balls and strikes but not pitching or batting, he was using a distinctly inappropriate, and perhaps misleading, metaphor to describe what it is that a judge, especially an appellate judge, is called upon to do. See Dworkin’s essay on the Roberts hearing in the New York Review of Books.
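To restate the rule/principle contrast in computational terms (my own analogy, not Dworkin’s), a rule behaves like a boolean test that either settles a question or does not apply at all, whereas principles behave like weighted considerations that incline a decision one way or the other and still leave room for judgment. The weights in the sketch below are, of course, arbitrary illustrations.

```python
# A crude computational analogy for Dworkin's rule/principle distinction
# (my illustration, not Dworkin's): a rule either applies and settles the
# question or it does not; principles contribute weight to one side or the
# other, and the final call is a judgment made over those weights.

def rule_will_is_valid(num_witnesses: int) -> bool:
    # A rule is all-or-nothing: a will signed by fewer than three
    # witnesses is simply invalid.
    return num_witnesses >= 3

# Principles carry weight rather than verdicts; the weights are illustrative.
principles = [
    ("no one may profit from his own wrong", -0.9),
    ("the testator's expressed intent should be honored", +0.6),
]

def weigh_principles(relevant):
    # Summing weights is a caricature of judicial judgment, but it captures
    # the idea that principles incline a decision without determining it.
    return sum(weight for _, weight in relevant)

print(rule_will_is_valid(2))          # False: the rule settles the question
print(weigh_principles(principles))   # -0.3: considerations lean against the heir
```

The analogy is imperfect precisely because judgment is not addition, but it makes vivid why calling balls and strikes is a poor description of what appellate judges do.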

Although I never met Dworkin, I did correspond with him on a few occasions, once by letter many years ago and more recently by email about various issues, the last time when I sent him a link to this post commenting on the oral argument before the Supreme Court about the Affordable Care Act. His responses were always cordial and unfailingly polite; I now regret not having saved the letters and the emails. Here are links to obituaries in the New York Times, The Guardian and The Financial Times.


About Me

David Glasner
Washington, DC

I am an economist in the Washington DC area. My research and writing have been mostly on monetary economics and policy and the history of economics. In my book Free Banking and Monetary Reform, I argued for a non-Monetarist, non-Keynesian approach to monetary policy, based on a theory of a competitive supply of money. Over the years, I have become increasingly impressed by the similarities between my approach and that of R. G. Hawtrey, and I hope to bring Hawtrey’s unduly neglected contributions to the attention of a wider audience.

My new book, Studies in the History of Monetary Theory: Controversies and Clarifications, has been published by Palgrave Macmillan.

Follow me on Twitter @david_glasner
