Posts Tagged 'methodology'

Two Cheers (Well, Maybe Only One and a Half) for Falsificationism

Noah Smith recently wrote a defense (sort of) of falsificationism in response to Sean Carroll’s suggestion that the time has come for scientists to throw falsificationism overboard as a guide for scientific practice. While Noah isn’t ready to throw out falsification as a scientific ideal, he does acknowledge that not everything that scientists do is really falsifiable.

But, as Carroll himself seems to understand in arguing against falsificationism, even though a particular concept or entity may itself be unobservable (and thus unfalsifiable), the larger theory of which it is a part may still have implications that are falsifiable. This is the case in economics. A utility function or a preference ordering is not observable, but by imposing certain conditions on that utility function, one can derive some (weakly) testable implications. This is exactly what Karl Popper, who introduced and popularized the idea of falsificationism, meant when he said that the aim of science is to explain the known by the unknown. To posit an unobservable utility function or an unobservable string is not necessarily to engage in purely metaphysical speculation, but to do exactly what scientists have always done, to propose explanations that would somehow account for some problematic phenomenon that they had already observed. The explanations always (or at least frequently) involve positing something unobservable (e.g., gravitation) whose existence can only be indirectly perceived by comparing the implications (predictions) inferred from the existence of the unobservable entity with what we can actually observe. Here’s how Popper once put it:

Science is valued for its liberalizing influence as one of the greatest of the forces that make for human freedom.

According to the view of science which I am trying to defend here, this is due to the fact that scientists have dared (since Thales, Democritus, Plato’s Timaeus, and Aristarchus) to create myths, or conjectures, or theories, which are in striking contrast to the everyday world of common experience, yet able to explain some aspects of this world of common experience. Galileo pays homage to Aristarchus and Copernicus precisely because they dared to go beyond this known world of our senses: “I cannot,” he writes, “express strongly enough my unbounded admiration for the greatness of mind of these men who conceived [the heliocentric system] and held it to be true […], in violent opposition to the evidence of their own senses.” This is Galileo’s testimony to the liberalizing force of science. Such theories would be important even if they were no more than exercises for our imagination. But they are more than this, as can be seen from the fact that we submit them to severe tests by trying to deduce from them some of the regularities of the known world of common experience by trying to explain these regularities. And these attempts to explain the known by the unknown (as I have described them elsewhere) have immeasurably extended the realm of the known. They have added to the facts of our everyday world the invisible air, the antipodes, the circulation of the blood, the worlds of the telescope and the microscope, of electricity, and of tracer atoms showing us in detail the movements of matter within living bodies.  All these things are far from being mere instruments: they are witness to the intellectual conquest of our world by our minds.

So I think that Sean Carroll, rather than arguing against falsificationism, is really thinking of falsificationism in the broader terms that Popper himself laid out a long time ago. And I think that Noah’s shrug-ability suggestion is also, with appropriate adjustments for changes in expository style, entirely in the spirit of Popper’s view of falsificationism. But to make that point clear, one needs to understand what motivated Popper to propose falsifiability as a criterion for distinguishing between science and non-science. Popper’s aim was to overturn logical positivism, a philosophical doctrine associated with the group of eminent philosophers who made up what was known as the Vienna Circle in the 1920s and 1930s. Building on the British empiricist tradition in science and philosophy, the logical positivists argued that our knowledge of the external world is based on sensory experience, and that apart from the tautological truths of pure logic (of which mathematics is a part) there is no other knowledge. Furthermore, no meaning could be attached to any statement whose validity could not be checked, either by examining its logical validity as an inference from explicit premises or by verifying it against sensory experience. According to this criterion, much of human discourse about ethics, morals, aesthetics, religion and much of philosophy was simply meaningless, aka metaphysics.

Popper, who grew up in Vienna and was on the periphery of the Vienna Circle, rejected the idea that logical tautologies and statements potentially verifiable by observation are the only conveyors of meaning between human beings. Metaphysical statements can be meaningful even if they can’t be confirmed by observation; they are meaningful so long as they are coherent rather than nonsensical. If there is a problem with metaphysical statements, the problem is not necessarily that they have no meaning. In making this argument, Popper suggested an alternative criterion of demarcation: not between meaning and non-meaning, but between science and metaphysics. Science is indeed different from metaphysics, but the difference is not that science is meaningful and metaphysics is not. The difference is that scientific statements can be refuted (or falsified) by observations while metaphysical statements cannot be refuted by observations. As a matter of logic, the only way to refute a proposition by an observation is for the proposition to assert that the observation was not possible. Unless you can say what observation would refute what you are saying, you are engaging in metaphysical, not scientific, talk. This gave rise to Popper’s then very surprising result. If you positively assert the existence of something – an assertion potentially verifiable by observation, and hence for logical positivists the quintessential scientific statement – you are making a metaphysical, not a scientific, statement. The statement that something (e.g., God, a string, or a utility function) exists cannot be refuted by any observation. However, the unobservable phenomenon may be part of a theory with implications that could be refuted by some observation. But in that case it would be the theory, not the posited object, that was refuted.

In fact, Popper thought that metaphysical statements not only could be meaningful, but could even be extremely useful, coining the term “metaphysical research programs,” because a metaphysical, unfalsifiable idea or theory could be the impetus for further research, possibly becoming scientifically fruitful in the way that evolutionary biology eventually sprang from the possibly unfalsifiable idea of survival of the fittest. That sounds to me pretty much like Noah’s idea of shrug-ability.

Popper was largely successful in overthrowing logical positivism, though whether it was entirely his doing (as he liked to claim) and whether it was fully overthrown are not so clear. One reason to think that it was not all his doing is that there is still a lot of confusion about what the falsification criterion actually means. Reading Noah Smith and Sean Carroll, I almost get the impression that they think the falsification criterion distinguishes not just between science and non-science but between meaning and non-meaning. Otherwise, why would anyone think that there is any problem with introducing an unfalsifiable concept into scientific discussion? When Popper argued that science should aim at proposing and testing falsifiable theories, he meant that one should not design a theory so that it can’t be tested, or adopt stratagems — ad hoc hypotheses — that serve only to account for otherwise falsifying observations. But if someone comes up with a creative new idea, and the idea can’t be tested, at least given the current observational technology, that is not a reason to reject the theory, especially if the new theory accounts for otherwise unexplained observations.

Another manifestation of Popper’s imperfect success in overthrowing logical positivism is that Paul Samuelson, in his classic Foundations of Economic Analysis, chose to call the falsifiable implications of economic theory “meaningful theorems.” By naming those implications “meaningful theorems,” Samuelson clearly was operating under the positivist presumption that only a proposition that could (at least in principle) be falsified by observation was meaningful. However, that formulation reflected an untenable compromise between Popper’s criterion for distinguishing science from metaphysics and the logical positivist criterion for distinguishing meaningful from meaningless statements. Instead of referring to meaningful theorems, Samuelson should have called them, more modestly, testable or scientific theorems.

So, at least as I read Popper, Noah Smith and Sean Carroll are only discovering what Popper already understood a long time ago.

At this point, some readers may be wondering why, having said all that, I seem to have trouble giving falsificationism (and Popper) even two cheers. So I am afraid that I will have to close this post on a somewhat critical note. The problem with Popper is that his rhetoric suggests that scientific methodology is a lot more important than it really is. Apart from some egregious examples like Marxism and Freudianism, which were deliberately formulated to exclude the possibility of refutation, there really aren’t that many theories entertained by scientists that can be ruled out of order on strictly methodological grounds. Popper can occasionally provide some methodological reminders to scientists to avoid relying on ad hoc theorizing — at least when a non-ad-hoc alternative is handy — but beyond that I don’t think methodology counts for very much in the day-to-day work of scientists. Many theories are difficult to falsify, but the difficulty is not necessarily the result of deliberate choices by the theorists; it is the result of the nature of the problem and the nature of the evidence that could potentially refute the theory. The evidence is what it is. It is nice to come up with a theory that predicts a novel fact that can be observed, but nature is not always so accommodating to our theories.

There is a kind of rationalistic (I am using “rationalistic” in the pejorative sense of Michael Oakeshott) faith that following the methodological rules that Popper worked so hard to formulate will guarantee scientific progress. Those rules tend to encourage an unrealistic focus on making theories testable (especially in economics) when by their nature the phenomena are too complex for theories to be formulated in ways that are susceptible to decisive testing. And although Popper recognized that empirical testing of a theory has very limited usefulness unless the theory is being compared to some alternative theory, too often discussions of theory testing are conducted in the context of testing a single theory in isolation. Kuhn and others have pointed out that science is not routinely carried out in the way that Popper suggested it should be. Popper, though he liked to cite examples from the history of science to illustrate his thesis, acknowledged the truth of that observation to some extent, arguing that he was offering a normative, not a positive, theory of scientific discovery. But why should we assume that Popper had more insight into the process of discovery for particular sciences than the practitioners of those sciences actually doing the research? That is the nub of the criticism of Popper that I take away from Oakeshott’s work. Life, and any form of endeavor, involves the transmission of ways of doing things, traditions that cannot be reduced to a set of rules but require education, training, practice and experience. That’s what Kuhn called normal science. Normal science can go off the tracks too, but it is naïve to think that a list of methodological rules is what will keep science moving constantly in the right direction. Why should Popper’s rules necessarily trump the lessons that practitioners have absorbed from the scientific traditions in which they have been trained? I don’t believe that there is any surefire recipe for scientific progress.

Nevertheless, when I look at the way economics is now being practiced and taught, I can’t help but think that a dose of Popperianism might not be the worst thing that could be administered to modern economics. But that’s a discussion for another day.


Microfoundations (aka Macroeconomic Reductionism) Redux

In two recent blog posts (here and here), Simon Wren-Lewis wrote sensibly about microfoundations. Although triggered by Wren-Lewis’s posts, the following comments are not intended as criticisms of him, though I think he does give microfoundations (as they are now understood) too much credit. Rather, my criticism is aimed at the way microfoundations have come to be used to restrict the kind of macroeconomic explanations and models that are up for consideration among working macroeconomists. I have written about microfoundations before on this blog (here and here), and some, if not most, of what I am going to say may be repetitive, but obviously the misconceptions associated with what Wren-Lewis calls the “microfoundations project” are not going to be dispelled by a couple of blog posts, so a little repetitiveness may not be such a bad thing. Jim Buchanan liked to quote the following passage from Herbert Spencer’s Data of Ethics:

Hence an amount of repetition which to some will probably appear tedious. I do not, however, much regret this almost unavoidable result; for only by varied iteration can alien conceptions be forced on reluctant minds.

When the idea of providing microfoundations for macroeconomics started to catch on in the late 1960s – and probably nowhere did it catch on sooner or with more enthusiasm than at UCLA – the idea resonated, because macroeconomics, which then mainly consisted of various versions of the Keynesian model, seemed to embody certain presumptions about how markets work that contradicted the presumptions of microeconomics about how markets work. In microeconomics, the primary mechanism for achieving equilibrium is the price (actually the relative price) of whatever good is being analyzed. A full (or general) microeconomic equilibrium involves a set of prices such that every market (whether for final outputs or for inputs into the productive process) is in equilibrium, equilibrium meaning that every agent is able to purchase or sell as much of any output or input as desired at the equilibrium price. The set of equilibrium prices not only achieves equilibrium; under some conditions, the equilibrium also has optimal properties, because each agent, in choosing how much to buy or sell of each output or input, is presumed to be acting in a way that is optimal given the preferences of the agent and the social constraints under which the agent operates. Those optimal properties don’t always follow from microeconomic presumptions, optimality being dependent on the particular assumptions (about preferences, production and exchange technology, and property rights) adopted by the analyst in modeling an individual market or an entire system of markets.
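To make the abstract notion of a market-clearing price vector a little more concrete, here is a minimal sketch of my own (the functional forms and parameter values are illustrative assumptions, not anything from the literature discussed here) showing how a single relative price clears both markets in a two-person, two-good exchange economy with Cobb-Douglas preferences.

```python
# A minimal sketch (my illustration): competitive equilibrium in a
# two-person, two-good exchange economy with Cobb-Douglas preferences.
# All parameter values are made up for the example.

def equilibrium_price(a_A, a_B, endow_A, endow_B):
    """Relative price of good x (good y is the numeraire, p_y = 1).

    Agent i has utility x**a_i * y**(1 - a_i) and endowment (x_i, y_i),
    so demand for x is a_i * wealth_i / p_x.  Market clearing in x pins
    down p_x; Walras's Law then clears the y market automatically.
    """
    xA, yA = endow_A
    xB, yB = endow_B
    return (a_A * yA + a_B * yB) / ((1 - a_A) * xA + (1 - a_B) * xB)

p = equilibrium_price(a_A=0.6, a_B=0.3, endow_A=(1.0, 2.0), endow_B=(3.0, 1.0))

# Check market clearing in x at the computed price.
wealth_A = p * 1.0 + 2.0
wealth_B = p * 3.0 + 1.0
demand_x = 0.6 * wealth_A / p + 0.3 * wealth_B / p
print(p, demand_x, 1.0 + 3.0)  # demand_x equals the total endowment of x (4.0)
```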

The problem with Keynesian macroeconomics was that it seemed to overlook, or ignore, or dismiss, or deny, the possibility that a price mechanism is operating — or could operate — to achieve equilibrium in the markets for goods and for labor services. In other words, the Keynesian model seemed to be saying that a macroeconomic equilibrium is compatible with the absence of market clearing, notwithstanding that the absence of market clearing had always been viewed as the defining characteristic of disequilibrium. Thus, from the perspective of microeconomic theory, if there is an excess supply of workers offering labor services, i.e., there are unemployed workers who would be willing to be employed at the same wage that currently employed workers are receiving, there ought to be market forces that would reduce wages to a level such that all workers willing to work at that wage could gain employment. Keynes, of course, had attempted to explain why workers could only reduce their nominal wages, not their real wages, and argued that nominal wage cuts would simply induce equivalent price reductions, leaving real wages and employment unchanged. The microeconomic reasoning on which that argument was based hinged on Keynes’s assumption that nominal wage cuts would trigger proportionate price cuts, but that assumption was not exactly convincing, if only because the percentage price cut would seem to depend not just on the percentage reduction in the nominal wage, but also on the labor intensity of the product. Keynes, habitually and inconsistently, argued as if labor were the only factor of production while at the same time invoking the principle of diminishing marginal productivity.
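A bit of back-of-the-envelope arithmetic (my own illustration, not Keynes’s algebra) shows why labor intensity matters here: if price is a constant markup over unit cost and labor accounts for a share alpha of that cost, a nominal wage cut is passed through to the price only in proportion to alpha, so the real wage falls unless labor is literally the only factor of production.

```python
# Back-of-the-envelope sketch (my illustration): price is a constant markup
# over unit cost, labor's share of unit cost is alpha, and other input
# prices are held fixed, so a wage cut of dw percent lowers the price by
# roughly alpha * dw percent.

def real_wage_change(wage_cut_pct, alpha):
    """Approximate percent change in the real wage after a nominal wage cut."""
    price_change_pct = -alpha * wage_cut_pct        # cost pass-through to price
    return -wage_cut_pct - price_change_pct         # change in w/p, to first order

for alpha in (1.0, 0.7, 0.4):
    print(alpha, real_wage_change(wage_cut_pct=10.0, alpha=alpha))
# alpha = 1.0 ->  0.0  (price falls one-for-one; real wage unchanged, Keynes's case)
# alpha = 0.7 -> -3.0  (real wage falls by about 3 percent)
# alpha = 0.4 -> -6.0
```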

At UCLA, the point of finding microfoundations was not to create a macroeconomics that would simply reflect the results and optimal properties of a full general equilibrium model. Indeed, what made the UCLA approach to microeconomics distinctive was that it aimed at deriving testable implications from relaxing the usual informational and institutional assumptions (full information, zero transactions costs, fully defined and enforceable property rights) underlying conventional microeconomic theory. If the way forward in microeconomics was to move away from the extreme assumptions underlying the perfectly competitive model, then it seemed plausible that relaxing those assumptions would be fruitful in macroeconomics as well. That led Armen Alchian and others at UCLA to think of unemployment as largely a search phenomenon. For a while that approach seemed promising, and to some extent the promise was fulfilled, but many implications of a purely search-theoretic approach to unemployment don’t seem to be that well supported empirically. For example, search models suggest that in recessions quits increase, and that workers become more likely to refuse offers of employment after the downturn than before. Neither of those implications seems to be true. A search model would suggest that workers are unemployed because they are refusing offers below their reservation wage, but in fact most workers become unemployed because they are laid off, and in recessions workers seem likely to accept offers of employment at the same wage that other workers are getting. Now it is possible to reinterpret workers’ behavior in recessions in a way that corresponds to the search-theoretic model, but the reinterpretation seems a bit of a stretch.

Even though he was an early exponent of the search theory of unemployment, Alchian greatly admired and frequently cited a 1974 paper by Donald Gordon, “A Neoclassical Theory of Keynesian Unemployment,” which proposed an implicit-contract theory of the employer-employee relationship. The idea was that workers make long-term commitments to their employers and, realizing that those commitments leave them vulnerable to exploitation by a unilateral wage cut imposed by the employer under threat of termination, expect some assurance from their employer that they will not be subjected to a unilateral demand to accept a wage cut. Such implicit understandings make it very difficult for employers, facing a reduction in demand, to force workers to accept a wage cut, because doing so would make it hard for the employer to retain the workers that are most highly valued and to attract new workers.

Gordon’s theory of implicit wage contracts has a certain similarity to Dennis Carlton’s explanation of why many suppliers don’t immediately raise prices to their steady customers. Like Gordon, Carlton posits the existence of implicit and sometimes explicit contracts in which customers commit to purchase minimum quantities or to purchase their “requirements” from a particular supplier. In return for the assurance of having a regular customer on whom the supplier can count, the supplier gives the customer assurance that he will receive his customary supply at the agreed upon price even if market conditions should change. Rather than raise the price in the event of a shortage, the supplier may feel that he is obligated to continue supplying his regular customers at the customary price, while raising the price to new or occasional customers to “market-clearing” levels. For certain kinds of supply relationships in which customer and supplier expect to continue transacting regularly over a long period of time, price is not the sole method by which allocation decisions are made.

Klein, Crawford and Alchian discussed a similar idea in their 1978 article about vertical integration as a means of avoiding or mitigating the threat of holdup when a supplier and a customer must invest in some sunk asset, e.g., a pipeline connection, for the supply relationship to be possible. The sunk investment implies that either party, under the right circumstances, could hold up the other party by threatening to withdraw from the relationship, leaving the other party stuck with a useless fixed asset. Vertical integration avoids the problem by aligning the incentives of the two parties, eliminating the potential for holdup. Price rigidity can thus be viewed as a milder form of vertical integration in cases where transactors have a relatively long-term relationship and want to assure each other that they will not be taken advantage of after making a commitment (i.e., foregoing other trading opportunities) to the other party.

The search model is fairly easy to incorporate into a standard framework, because search can be treated as a form of self-employment that is an alternative to accepting employment. The shape and position of the individual’s supply curve reflect his expectations about the future wage offers he will receive if he chooses not to accept employment in the current period. The more optimistic the worker’s expectation of future wages, the higher the worker’s reservation wage in the current period. The more certain the worker feels about the expected future wage, the more elastic is his supply curve in the neighborhood of the expected wage. Thus, despite its empirical shortcomings, the search model could serve as a convenient heuristic device for modeling cyclical increases in unemployment because of the unwillingness of workers to accept nominal wage cuts. From a macroeconomic modeling perspective, the incorrect or incomplete representation of the reason for the unwillingness of workers to accept wage cuts may be less important than the overall implication of the model, which is that unanticipated aggregate demand shocks can have significant and persistent effects on real output and employment. For example, in his reformulation of macroeconomic theory, Earl Thompson, though he was certainly aware of Donald Gordon’s paper, relied exclusively on a search-theoretic rationale for Keynesian unemployment, and I don’t know (or can’t remember) whether he had a specific objection to Gordon’s model or simply preferred to use the search-theoretic approach for pragmatic modeling reasons.
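For readers who want to see the mechanics behind this heuristic, here is a minimal sketch of the standard McCall-style reservation-wage calculation. The offer distribution, the flow value of searching b, and the discount factor beta are illustrative assumptions of mine, not parameters taken from any of the models discussed above.

```python
import numpy as np

# A minimal McCall-style sketch (illustrative numbers): the reservation
# wage solves w* = (1 - beta) * b + beta * E[max(w, w*)], where b is the
# flow value of searching and offers w are drawn each period.

def reservation_wage(offers, probs, b, beta, tol=1e-10):
    w_star = b
    while True:
        w_new = (1 - beta) * b + beta * np.sum(probs * np.maximum(offers, w_star))
        if abs(w_new - w_star) < tol:
            return w_new
        w_star = w_new

offers = np.linspace(10, 20, 11)   # possible wage offers
probs = np.full(11, 1 / 11)        # uniform offer distribution

print(reservation_wage(offers, probs, b=6.0, beta=0.95))

# A more optimistic offer distribution raises the reservation wage,
# which is the mechanism the paragraph above appeals to.
print(reservation_wage(offers + 2.0, probs, b=6.0, beta=0.95))
```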

At any rate, these comments about the role of search models in modeling unemployment decisions are meant to illustrate why microfoundations could be useful for macroeconomics: by adding to the empirical content of macromodels, providing insight into the decisions or circumstances that lead workers to accept or reject employment in the aftermath of aggregate demand shocks, or that lead employers to impose layoffs on workers rather than offer employment at reduced wages. The spectrum of such microeconomic theories of the employer-employee relationship has provided us with a richer understanding of what the term “sticky wages” might actually be referring to, beyond the existence of minimum wage laws or collective bargaining contracts specifying nominal wages over a period of time for all covered employees.

In this context microfoundations meant providing a more theoretically satisfying, more microeconomically grounded explanation for a phenomenon – “sticky wages” – that seemed somehow crucial for generating the results of the Keynesian model. I don’t think that anyone would question that microfoundations in this narrow sense has been an important and useful area of research. And it is not microfoundations in this sense that is controversial. The sense in which microfoundations is controversial is whether the aggregate quantities generated by a macroeconomic model must be shown to be consistent with the optimizing choices of all agents in the model. In other words, the equilibrium solution of a macroeconomic model must be such that all agents are optimizing intertemporally, subject to whatever informational imperfections are specified by the model. If the model is not derived from or consistent with the solution to such an intertemporal optimization problem, the macromodel is now considered inadequate and unworthy of consideration. Here’s how Michael Woodford, a superb economist, but very much part of the stifling microfoundations consensus that has overtaken macroeconomics, put it in his paper “The Convergence in Macroeconomics: Elements of the New Synthesis”:

But it is now accepted that one should know how to render one’s growth model and one’s business-cycle model consistent with one another in principle, on those occasions when it is necessary to make such connections. Similarly, microeconomic and macroeconomic analysis are no longer considered to involve fundamentally different principles, so that it should be possible to reconcile one’s views about household or firm behavior, or one’s view of the functioning of individual markets, with one’s model of the aggregate economy, when one needs to do so.

In this respect, the methodological stance of the New Classical school and the real business cycle theorists has become the mainstream. But this does not mean that the Keynesian goal of structural modeling of short-run aggregate dynamics has been abandoned. Instead, it is now understood how one can construct and analyze dynamic general-equilibrium models that incorporate a variety of types of adjustment frictions, that allow these models to provide fairly realistic representations of both shorter-run and longer-run responses to economic disturbances. In important respects, such models remain direct descendants of the Keynesian macroeconometric models of the early postwar period, though an important part of their DNA comes from neoclassical growth models as well.

Woodford argues that by incorporating various imperfections into their general equilibrium models, e.g., imperfectly competitive output and labor markets, lags in the adjustment of wages and prices to changes in market conditions, and search and matching frictions, it is possible to reconcile the existence of underutilized resources with intertemporal optimization by agents.

The insistence of monetarists, New Classicals, and early real business cycle theorists on the empirical relevance of models of perfect competitive equilibrium — a source of much controversy in past decades — is not what has now come to be generally accepted. Instead, what is important is having general-equilibrium models in the broad sense of requiring that all equations of the model be derived from mutually consistent foundations, and that the specified behavior of each economic unit make sense given the environment created by the behavior of the others. At one time, Walrasian competitive equilibrium models were the only kind of models with these features that were well understood; but this is no longer the case.

Woodford shows no recognition of the possibility of multiple equilibria, or of the possibility that the evolution of an economic system and its time-series data may be path-dependent, making the long-run neutrality propositions characterizing most DSGE models untenable. If the world – the data-generating mechanism – is not like the world assumed by modern macroeconomics, the estimates derived from econometric models reflecting the worldview of modern macroeconomics will be inferior to estimates derived from an econometric model reflecting another, more accurate, worldview. For example, if there are many possible equilibria, depending on changes in expectational parameters or on accidental deviations from an equilibrium time path, the idea of intertemporal optimization may not even be meaningful. Rather than optimize, agents may simply follow certain simple rules of thumb. But, on methodological principle, modern macroeconomics treats the estimates generated by any alternative econometric model insufficiently grounded in the microeconomic principles of intertemporal optimization as illegitimate.
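Here is a toy simulation, entirely my own construction rather than anything resembling a DSGE model, of the kind of path dependence at issue: when current activity responds nonlinearly to past activity, the dynamics have two stable equilibria, and a temporary shock can permanently shift which one the economy converges to, so the long-run outcome depends on the path rather than being neutral with respect to it.

```python
import numpy as np

# A toy sketch (my own construction) of path dependence: if current activity
# responds in an S-shaped way to lagged activity, the map has two stable
# equilibria, and a temporary shock can permanently shift which one the
# economy converges to.  The functional form and numbers are assumptions.

def response(y):
    return 0.2 + 0.8 / (1.0 + np.exp(-10.0 * (y - 0.5)))  # S-shaped response

def simulate(shock_periods, T=60, y0=0.9):
    y = y0
    path = []
    for t in range(T):
        y = response(y) - (0.45 if t in shock_periods else 0.0)
        path.append(y)
    return path

no_shock = simulate(shock_periods=set())
with_shock = simulate(shock_periods={10, 11, 12})   # temporary negative shock

# The two economies end up at different long-run levels: hysteresis.
print(no_shock[-1], with_shock[-1])
```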

Even worse from the perspective of microfoundations are the implications of something called the Sonnenschein-Mantel-Debreu Theorem, which, as I imperfectly understand it, says something like the following. Even granting the usual assumptions of the standard general equilibrium model (continuous individual demand and supply functions, homogeneity of degree zero in prices, Walras’s Law, and suitable boundary conditions on demand and supply functions), there is no guarantee that there is a unique stable equilibrium for such an economy. Thus, even apart from the dependence of equilibrium on expectations, there is no rationally expected equilibrium, because there is no unique equilibrium to serve as an attractor for expectations. Thus, as I have pointed out before, as much as macroeconomics may require microfoundations, microeconomics requires macrofoundations, perhaps even more so.
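To see what non-uniqueness can look like, here is a small illustration. The functional form is my own invention, chosen only to satisfy homogeneity of degree zero and Walras’s Law on a price range bounded away from zero; it is not derived from any particular economy, but it shows an aggregate excess demand function with three market-clearing relative prices rather than one.

```python
import numpy as np

# An illustrative sketch of the Sonnenschein-Mantel-Debreu point (the
# functional form is my own invention): an aggregate excess demand function
# for good x that is continuous, homogeneous of degree zero in (p_x, p_y),
# and satisfies Walras's Law, yet has three equilibrium relative prices.

def excess_demand_x(p):
    """Excess demand for x as a function of the relative price p = p_x / p_y."""
    return -(p - 0.5) * (p - 1.0) * (p - 2.0) / p**3

def excess_demand_y(p):
    """Implied by Walras's Law: p * z_x(p) + z_y(p) = 0."""
    return -p * excess_demand_x(p)

# Locate equilibria by scanning for sign changes of z_x on a price grid.
grid = np.linspace(0.1, 3.0, 30_000)
z = excess_demand_x(grid)
roots = grid[:-1][np.sign(z[:-1]) != np.sign(z[1:])]
print(roots)  # roughly 0.5, 1.0, 2.0: three market-clearing relative prices
```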

Now let us compare the methodological demand for microfoundations for macroeconomics, which I would describe as a kind of macroeconomic methodological reductionism, with the reductionism of Newtonian physics. Newtonian physics reduced the Keplerian laws of planetary motion to more fundamental principles of gravitation governing the motion of all bodies, celestial and terrestrial. In so doing, Newtonian physics achieved an astounding increase in explanatory power and empirical scope. What has the methodological reductionism of modern macroeconomics achieved? Reductionism was not the source, but the result, of scientific progress. But as Carlaw and Lipsey demonstrated recently in an important paper, methodological reductionism in macroeconomics has resulted in a clear retrogression in empirical and explanatory power. Thus, methodological reductionism in macroeconomics is an antiscientific exercise in methodological authoritarianism.


About Me

David Glasner
Washington, DC

I am an economist in the Washington DC area. My research and writing have been mostly on monetary economics and policy and the history of economics. In my book Free Banking and Monetary Reform, I argued for a non-Monetarist, non-Keynesian approach to monetary policy, based on a theory of a competitive supply of money. Over the years, I have become increasingly impressed by the similarities between my approach and that of R. G. Hawtrey, and I hope to bring Hawtrey's unduly neglected contributions to the attention of a wider audience.
