Capitalism, with friends like these, you don’t need enemies

Though I have been interested in ecological economics ever since I read The Limits to Growth (Meadows, Randers, and Meadows 1972), E.F. Schumacher (Schumacher 1973, 1979) and Herman Daly (Daly 1974) in the early 1970s, and I have been a critic of Neoclassical economics for just as long, I didn’t start critiquing the Neoclassical approach to climate change until 2019. This was because, though I expected it to be bad, I felt that I could not critique it until after I had made a positive contribution to ecological economics myself.

This is a chapter in my draft book Rebuilding Economics from the Top Down, which will be published early in 2024 by the Budapest Centre for Long-Term Sustainability.


That occurred when, while working with Bob Ayres, the brilliant pioneer of the economics of energy, I hit upon the aphorism “labour without energy is a corpse, while capital without energy is a sculpture”, which enabled me to work out how to bring energy into mathematical models of production in a fundamental way, using the concepts explained in the last chapter. In June 2019, after our paper “A Note on the Role of Energy in Production” (Keen, Ayres, and Standish 2019, p. 41) had been published, and after William Nordhaus had been awarded the “Nobel” Prize in Economics in 2018 for his work on the economics of climate change, I sat down to read the Neoclassical literature, commencing with Richard Tol’s overview paper “The Economic Effects of Climate Change” (Tol 2009).

Minutes later, I read the sentences quoted below, and was both horrified and deeply regretful of my decision to delay taking this area on:

An alternative approach, exemplified in Mendelsohn’s work, can be called the statistical approach. It is based on direct estimates of the welfare impacts, using observed variations (across space within a single country) in prices and expenditures to discern the effect of climate. Mendelsohn assumes that the observed variation of economic activity with climate over space holds over time as well; and uses climate models to estimate the future effect of climate change. (Tol 2009, p. 32. Emphasis added)

This assumption was patently absurd, as I explain in the next chapter, and yet it had been published in a “top five” economics journal (Bornmann, Butz, and Wohlrabe 2018; Mixon and Upadhyaya 2022). Worse, as I read the literature in detail, I found that, though many other aspects of the Neoclassical economics of climate change had been criticised by other economists (Kaufmann 1997, 1998; Darwin 1999; Quiggin and Horowitz 1999; DeCanio 2003; Schlenker, Hanemann, and Fisher 2005; Ackerman and Stanton 2008; Stanton, Ackerman, and Kartha 2009; Ackerman, Stanton, and Bueno 2010; Weitzman 2011a, 2011b; Ackerman and Munitz 2012; Pindyck 2013, 2017), no-one had criticised it for what I saw as its most obvious flaw: the simply ludicrous “data” to which Neoclassical models of climate change had been fitted.

The empirical assumptions that economists specialising in climate change have made are, to be frank, so stupid that, even if their mathematical models perfectly captured the actual structure of the global economy (which of course they do not), their forecasts of economic damages from climate change would still be ludicrously low. These assumptions are also so obviously wrong that the mystery is why they were ever published. Therefore, before I discuss their work on climate change, I have to take a diversion into the topics of scientific and economic methodology.

  1. “Simplifying Assumptions”, Milton Friedman, and the “F-twist”

As I noted in Chapter 5, every survey that has ever been done of the cost structure of real-world firms has returned a result that contradicts Neoclassical economic theory. Rather than firms facing rising marginal cost because of diminishing marginal productivity, the vast majority of real-world firms operate with substantial excess capacity, and therefore experience constant or even rising marginal productivity as production increases. This means that marginal cost either remains constant or falls with output, rather than rising, as mainstream economic theory assumes.

In the 1930s, 40s and early 50s, a large number of papers were published reporting on these results in the leading journals of the discipline, such as Oxford Economic Papers (Hall and Hitch 1939; Tucker 1940; Andrews 1941, 1949, 1950; Andrews and Brunner 1950), the Quarterly Journal of Economics (Eiteman 1945), and the American Economic Review (Means 1936; Tucker 1937; Garver et al. 1938; Tucker 1938; Lester 1946; Oliver 1947; Eiteman 1947; Lester 1947; Eiteman 1948; Eiteman and Guthrie 1952; Eiteman 1953). These papers made the point that, since marginal cost is either constant or falling, the mainstream profit-maximisation rule, of equating marginal revenue to marginal cost, must be wrong.

Writing in the American Economic Review in 1947, Eiteman put it this way: an engineer designs a factory

so as to cause the variable factor to be used most efficiently when the plant is operated close to capacity. Under such conditions an average variable cost curve declines steadily until the point of capacity output is reached. A marginal curve derived from such an average cost curve lies below the average curve at all scales of operation short of peak production, a fact that makes it physically impossible for an enterprise to determine a scale of operations by equating marginal cost and marginal revenues unless demand is extremely inelastic. (Eiteman 1947, p. 913. Emphasis added)

One might have expected that economists would have reacted to this empirical discovery by realising that economic theory had to change. But instead, in “The Methodology of Positive Economics” (Friedman 1953), Milton Friedman argued that economists should ignore these papers, and criticism of economics for being unrealistic in general, on the basis that the more significant a theory was, the more “unrealistic” its assumptions would be—an argument that Samuelson dubbed “The F-twist” (Archibald, Simon, and Samuelson 1963; Wong 1973):

In so far as a theory can be said to have “assumptions” at all, and in so far as their “realism” can be judged independently of the validity of predictions, the relation between the significance of a theory and the “realism” of its “assumptions” is almost the opposite of that suggested by the view under criticism. Truly important and significant hypotheses will be found to have “assumptions” that are wildly inaccurate descriptive representations of reality, and, in general, the more significant the theory, the more unrealistic the assumptions (in this sense). The reason is simple. A hypothesis is important if it “explains” much by little, that is, if it abstracts the common and crucial elements from the mass of complex and detailed circumstances surrounding the phenomena to be explained and permits valid predictions on the basis of them alone. To be important, therefore, a hypothesis must be descriptively false in its assumptions… (Friedman 1953, p. 14. Emphasis added)

He followed up with an attack on the significance of the papers which pointed out that marginal cost does not rise with output (as well as an attack on the model of imperfect competition):

The theory of monopolistic and imperfect competition is one example of the neglect in economic theory of these propositions. The development of this analysis was explicitly motivated, and its wide acceptance and approval largely explained, by the belief that the assumptions of “perfect competition” or “perfect monopoly” said to underlie neoclassical economic theory are a false image of reality. And this belief was itself based almost entirely on the directly perceived descriptive inaccuracy of the assumptions rather than on any recognized contradiction of predictions derived from neoclassical economic theory. The lengthy discussion on marginal analysis in the American Economic Review some years ago is an even clearer, though much less important, example. The articles on both sides of the controversy largely neglect what seems to me clearly the main issue—the conformity to experience of the implications of the marginal analysis—and concentrate on the largely irrelevant question whether businessmen do or do not in fact reach their decisions by consulting schedules, or curves, or multivariable functions showing marginal cost and marginal revenue. (Friedman 1953, p. 15. Emphasis added)

Friedman ridiculed the survey methods behind this research:

The abstract methodological issues we have been discussing have a direct bearing on the perennial criticism of “orthodox” economic theory as “unrealistic”… A particularly clear example is furnished by the recent criticisms of the maximization-of-returns hypothesis on the grounds that businessmen do not and indeed cannot behave as the theory “assumes” they do. The evidence cited to support this assertion is generally taken either from the answers given by businessmen to questions about the factors affecting their decisions—a procedure for testing economic theories that is about on a par with testing theories of longevity by asking octogenarians how they account for their long life—or from descriptive studies of the decision-making activities of individual firms. Little if any evidence is ever cited on the conformity of businessmen’s actual market behaviour—what they do rather than what they say they do—with the implications of the hypothesis being criticized, on the one hand, and of an alternative hypothesis, on the other. (Friedman 1953, pp. 30-31. Emphasis added)

And he also ridiculed the search for more realism in general:

A theory or its “assumptions” cannot possibly be thoroughly “realistic” in the immediate descriptive sense so often assigned to this term. A completely “realistic” theory of the wheat market would have to include not only the conditions directly underlying the supply and demand for wheat but also the kind of coins or credit instruments used to make exchanges; the personal characteristics of wheat-traders such as the color of each trader’s hair and eyes, his antecedents and education, the number of members of his family, their characteristics, antecedents, and education, etc.; the kind of soil on which the wheat was grown, its physical and chemical characteristics, the weather prevailing during the growing season; the personal characteristics of the farmers growing the wheat and of the consumers who will ultimately use it; and so on indefinitely. Any attempt to move very far in achieving this kind of “realism” is certain to render a theory utterly useless. (Friedman 1953, p. 32)

Friedman’s paper merely codified the standard retort that economists had always made when their assumptions were challenged, but since its publication, it has been cited as the authority whenever needed. It also had a definite, if perverse, effect on the development of Neoclassical theory: though Friedman cautioned in a footnote that “The converse of the proposition does not of course hold: assumptions that are unrealistic (in this sense) do not guarantee a significant theory”, his claim led to something of an arms race amongst economists to make the most unrealistic assumptions possible.

  2. Domain Assumptions, Paradigms, and Scientific Revolutions

In the paper “‘Unreal Assumptions’ in Economic Theory: The F‐Twist Untwisted” (Musgrave 1981), the philosopher Alan Musgrave explained that Friedman’s dictum was true of “simplifying assumptions”, but utterly false when applied to what he called “domain assumptions”.

A simplifying assumption is a decision to omit some aspect of the real world which, if included, would make your model vastly more complicated while changing your results only slightly. The items Friedman lists in his example of a “completely ‘realistic’ theory of the wheat market” are unrealistic instances of this: an economic model including “the color of each trader’s hair and eyes” would be vastly more complicated, and would obviously have no effect on the model’s predictive power, so why would anyone bother creating such a model?

A more realistic example of a simplifying assumption is Galileo’s apocryphal proof that objects of different weight fall at the same speed, by dropping lead balls from the Leaning Tower of Pisa. Such an experiment “assumes” that the balls are being dropped in a vacuum. Taking air resistance into account would make for a vastly more complicated experiment, but the result would be much the same: given the height of the Leaning Tower of Pisa, and the density and weight of lead balls, the simplifying assumption that air resistance can be ignored is reasonable.
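This can be checked with a back-of-the-envelope calculation. All the figures below are illustrative assumptions on my part (a roughly 56-metre drop, a 5 cm lead ball, a sphere drag coefficient of 0.47), but they show how little difference air resistance makes in this particular experiment:

```python
import math

# Illustrative assumptions, not measured historical values:
g = 9.81           # gravitational acceleration, m/s^2
h = 56.0           # approximate height of the Leaning Tower of Pisa, m
r = 0.05           # radius of a lead ball, m
rho_lead = 11340.0 # density of lead, kg/m^3
rho_air = 1.225    # density of air at sea level, kg/m^3
Cd = 0.47          # drag coefficient of a sphere (assumed)

m = rho_lead * (4 / 3) * math.pi * r**3  # ball mass, roughly 6 kg
c = 0.5 * rho_air * Cd * math.pi * r**2  # quadratic-drag constant
v_t = math.sqrt(m * g / c)               # terminal velocity, m/s

# Fall time in a vacuum: t = sqrt(2h/g)
t_vacuum = math.sqrt(2 * h / g)

# Fall time with quadratic drag: t = (v_t/g) * arccosh(exp(g*h/v_t^2))
t_drag = (v_t / g) * math.acosh(math.exp(g * h / v_t**2))

print(f"vacuum: {t_vacuum:.3f} s, with drag: {t_drag:.3f} s, "
      f"difference: {100 * (t_drag / t_vacuum - 1):.2f}%")
```

On these assumptions the two fall times differ by well under one per cent, which is why ignoring air resistance here is a legitimate simplifying assumption.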

But a domain assumption is completely different: it determines whether your model applies or not. If the domain assumption holds, then your theory applies, and is valid; if it does not, then your theory does not apply, and is invalid. Domain assumptions must therefore be realistic, otherwise the resulting theory will be false.

This is why the target of Friedman’s ire is so important: he was defending, not a simplifying assumption, but a domain assumption which is false.

The Neoclassical theory of profit maximisation—that a firm maximizes profit by equating marginal cost and marginal revenue—applies if firms face rising marginal cost. However, the papers that Friedman advised economists not to read revealed that, for the vast majority of firms, marginal cost does not rise, for the reasons given by Eiteman above: factories are designed by engineers to be most efficient at maximum output. As the mathematics in Chapter 5 showed, in this real-world situation, marginal revenue is always greater than marginal cost, and the sensible profit-maximisation strategy is to sell as many units as possible. The assumption that real-world factories experience diminishing marginal productivity (and therefore rising marginal cost) is thus a domain assumption, and a false one, which leads to a false theory of profit maximisation for the real world.
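The logic can be illustrated with a toy calculation. All the numbers below are hypothetical, and for simplicity the firm is treated as selling at a given market price; the point is that with constant marginal cost below price, profit rises with every unit sold, so the profit-maximising output is capacity, whereas only the textbook rising-marginal-cost case yields an interior optimum where marginal cost reaches price:

```python
# Hypothetical numbers for illustration only.
p = 10.0         # market price per unit (assumed)
mc = 6.0         # constant marginal cost (assumed)
fixed = 500.0    # fixed cost (assumed)
capacity = 1000  # engineering capacity of the plant (assumed)

def profit_constant_mc(q):
    """Empirically typical case: constant marginal cost below price."""
    return (p - mc) * q - fixed

def profit_rising_mc(q):
    """Textbook case: cost term 0.005*q^2 makes marginal cost rise with q."""
    return p * q - (fixed + mc * q + 0.005 * q**2)

best_constant = max(range(capacity + 1), key=profit_constant_mc)
best_rising = max(range(capacity + 1), key=profit_rising_mc)

print(best_constant)  # the capacity output: every extra unit adds profit
print(best_rising)    # an interior output, where rising MC catches up with price
```

In the constant-cost case there is simply no interior turning point for the marginal rule to find: the "equate marginal cost and marginal revenue" recipe has nothing to bite on.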

This is why the results of Blinder’s survey were, as Blinder himself put it, “overwhelmingly bad news … (for economic theory)” (Blinder 1998, p. 102), because the theory of supply and demand falls apart:

  • There is no supply curve: as Blinder acknowledges, rising marginal cost is “enshrined in every textbook and employed in most economic models. It is the foundation of the upward-sloping supply curve” (Blinder 1998, p. 101).
  • Though sales necessarily equal purchases, this is not a point of equilibrium: the sellers have excess capacity and could sell more units if they could find buyers.
  • The welfare-maximising conclusions of Neoclassical economics also fall by the wayside: rather than the market equating marginal revenue and marginal cost, thus maximising consumer utility subject to the constraints of producer cost, there is a gap between the two that explains the evolutionary competitive process we actually see in the real world.
  • That competitive struggle leads to a power law distribution of firm sizes—which again, we see in the real world (Axtell 2001, 2006). Perfect competition is not a desirable state, but a myth. A very different—and much richer—theory is required to understand actual competition and actual microeconomic behaviour than the toy models of Neoclassical economics.

Acknowledging this empirical research into actual firm costs, in other words, requires a revolution in economic thought, just as Galileo’s experiment led to a revolution in scientific thought. And just as other intellectuals and the Catholic Church resisted Galileo’s findings, because they overturned beliefs that they had held for millennia, Neoclassical economists resisted these findings, because they overturn the Marshallian paradigm to which they are wedded.

This reflects the phenomenon noted by the philosopher of science Thomas Kuhn (Kuhn 1970) and the physicist Max Planck (Planck 1949), that most scientists, once they are committed to a paradigm, continue to cling to it, even after encountering contrary evidence.

Blinder’s reaction to his own research is both instructive and pathetic. A decade after he found that diminishing marginal productivity does not apply to real-world firms, Blinder continued to teach in his textbook that real-world firms are subject to diminishing marginal productivity, and experience rising marginal cost (Baumol and Blinder 2011, pp. 127-133). In fact, his discovery clearly disturbed him so much that the explanation he gives for diminishing marginal productivity, and one of the examples he gives of it, are both wrong, even from the point of view of Neoclassical economics.

Blinder’s behaviour shows how disturbing the results of his research were to him, and is a vivid example of the mental gymnastics that believers in a failed paradigm will undertake to avoid abandoning it. Had Blinder acknowledged his own research and followed through its consequences, as I did in Chapter 5, he could no longer have been a Neoclassical economist. So instead, he ignored his results, and does not even mention this research in his textbook!

Blinder’s reaction to his discovery is not an exception: it is the norm. When Neoclassical researchers find results that contradict Neoclassical theory, they almost always make outrageous assumptions to cover them up, and so persuade themselves that they haven’t broken the paradigm. They then describe these outrageous propositions as simplifying assumptions. Here is a by-no-means complete selection of such assumptions:

  • In 1953, William Gorman considered the question of whether a country could have its tastes represented by a single “community preference field”—which is a common assumption in the Neoclassical theory of international trade. He concluded that:

there is just one community indifference locus through each point if, and only if, the Engel curves for different individuals at the same prices are parallel straight lines. (Gorman 1953, p. 63)

This amounts to the assumption, not merely that all individuals have the same tastes (otherwise “Engel curves” would intersect), but also that all commodities are identical (otherwise they would not be straight lines). He then commented—as I noted earlier—that:

The necessary and sufficient condition quoted above is intuitively reasonable. It says, in effect, that an extra unit of purchasing power should be spent in the same way no matter to whom it is given. (Gorman 1953, p. 63)

This is not a simplifying assumption: it is an insane, false assumption, made to cling to the belief that Neoclassical trade theory is valid.
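Why Gorman’s condition matters can be seen in a toy example (all functional forms and numbers here are invented for illustration). Take two consumers whose Engel curves are not parallel straight lines: one spends a fixed share of income on a good, the other’s spending on it rises with the square root of income. Aggregate demand then depends on how a given total income is distributed, so no single “community preference field” can represent the pair:

```python
import math

def demand_A(income):
    # Engel curve: a straight line through the origin (spends half of income)
    return 0.5 * income

def demand_B(income):
    # Engel curve: concave in income, so not parallel to A's
    return 10 * math.sqrt(income)

def aggregate(income_A, income_B):
    # total demand for the good at a fixed price, given an income split
    return demand_A(income_A) + demand_B(income_B)

total = 200
split_1 = aggregate(100, 100)  # total income split evenly
split_2 = aggregate(150, 50)   # same total income, different distribution

print(split_1, split_2)  # different aggregate demand for the same total income
```

Since the aggregate changes when purchasing power is merely redistributed, "an extra unit of purchasing power" is not "spent in the same way no matter to whom it is given", and the aggregation Gorman needed fails.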

  • In 1956, Paul Samuelson, considering a related problem—whether a downward-sloping market demand curve could be derived by summing individuals who all had downward-sloping individual demand curves—concluded that this could be done for a family:

if within the family there can be assumed to take place an optimal reallocation of income so as to keep each member’s dollar expenditure of equal ethical worth, then there can be derived for the whole family a set of well-behaved indifference contours relating the totals of what it consumes: the family can be said to act as if it maximizes such a group preference function. (Samuelson 1956, p. 21)

He immediately generalised this to the whole of society:

The same argument will apply to all of society if optimal reallocations of income can be assumed to keep the ethical worth of each person’s marginal dollar equal. (Samuelson 1956, p. 21. Emphasis added)

So, a downward-sloping market demand curve can be derived if we’re willing to assume that someone, “a benevolent central authority perhaps” (Mas-Colell, Whinston, and Green 1995, p. 117. Emphasis added), to cite a textbook that teaches this result to students, reallocates income before consumption “to keep the ethical worth of each person’s marginal dollar equal”.

This is not a simplifying assumption: it is an insane, false assumption, made to cling to the belief that the Neoclassical theory of demand is valid.

  • In 1964, William Sharpe tried to construct a theory of asset pricing by first building an elaborate model of a single individual allocating his investments between a risk-free interest-paying bond and a spectrum of risky assets. Having built this model of a single individual, he extended it to a model of all investors by assuming:

homogeneity of investor expectations: investors are assumed to agree on the prospects of various investments—the expected values, standard deviations and correlation coefficients described in Part II (Sharpe 1964, pp. 433-34)

Not only that: these shared expectations were assumed to be correct, as Fama confirmed when reporting on the (surprise, surprise!) empirical failure of this theory forty years later:

And this distribution is the true one—that is, it is the distribution from which the returns we use to test the model are drawn. (Fama and French 2004, p. 26)

It is little wonder that a theory of stock market prices that assumed that all investors were able to accurately predict the future failed!

This is not a simplifying assumption: it is an insane, false assumption, made to cling to the belief that the Neoclassical theory of asset prices is valid.

None of this, nor the myriad other examples I could cite, represents the behaviour of a scientific community. It is instead the behaviour of a cult, hanging on to its core beliefs despite repeated contradictions of those beliefs by reality.

Ironically, this is not unique to Neoclassical economists: Marx did the same thing when he developed a philosophical explanation for the source of value which contradicted his “Labour Theory of Value” (Keen 1993a, 1993b). Nor is it even unique to economists. Max Planck, the brilliant physicist who ushered in the era of quantum mechanics, lamented in his autobiography that:

It is one of the most painful experiences of my entire scientific life that I have but seldom—in fact, I might say, never—succeeded in gaining universal recognition for a new result, the truth of which I could demonstrate by a conclusive, albeit only theoretical proof. (Planck 1949, p. 22)

What is unique to the sciences, in contrast to economics, is that the anomalies which show that a dominant paradigm—say, the Maxwellian one about the nature of energy—is false are permanent. Once an anomaly has been discovered, it can be recreated at any time by anyone with the necessary equipment. Therefore, even if existing scientists refuse to abandon the falsified paradigm, they are ultimately replaced by new scientists who, as students, know that the anomaly exists, and that they will make their intellectual mark if they can resolve it with a new paradigm. Planck’s explanation of how sciences advance is often paraphrased as “science advances one funeral at a time”, but what he actually described was generational change:

A new scientific truth does not triumph by convincing its opponents and making them see the light, but rather because its opponents eventually die, and a new generation grows up that is familiar with it. (Planck 1949, p. 23)

These mechanisms of change do not exist in economics, because anomalies in economics are transient. If the anomaly is an event, like the Great Depression, it can be forgotten as new events take its place. And if the anomaly is a theoretical result, like the “Cambridge Controversies” over the nature of capital, which Samuelson conceded the Neoclassicals had lost, ending his paper “A Summing Up” with this poignant concession:

If all this causes headaches for those nostalgic for the old-time parables of neoclassical writing, we must remind ourselves that scholars are not born to live an easy existence. We must respect, and appraise, the facts of life. (Samuelson 1966)

then it can be forgotten if later Neoclassical economists, including Samuelson himself, continue to behave as if they had in fact won the debate, by continuing to teach the concepts, such as the marginal productivity theory of income distribution, which that debate proved false. Consequently, later Neoclassicals can believe that they won arguments which Neoclassical participants at the time conceded they had lost. Here is Paul Krugman, writing in 2014:

And what’s going on here, I think, is a fairly desperate attempt to claim that the Great Recession and its aftermath somehow prove that Joan Robinson and Nicholas Kaldor were right in the Cambridge controversies of the 1960s. It’s a huge non sequitur, even if you think they were indeed right (which you shouldn’t.) But that’s what seems to be happening.

This unscientific behaviour by Neoclassical economists has several consequences that enabled the dangerous nonsense I detail in the next chapter to be published.

Firstly, Neoclassical economists are raised in a virtual ocean of unrealistic domain assumptions, while at the same time these false assumptions are essential to their Panglossian vision of capitalism. That makes them almost blind to the assumptions in a Neoclassical economics paper: they read the method by which the results were derived from the assumptions, rather than casting a critical eye over the assumptions themselves.

Secondly, the essential role of these assumptions is to preserve the Neoclassical vision of capitalism, and not to preserve capitalism itself—which they innately believe is indestructible anyway. They become zealots for market solutions above all other approaches to remedying society’s ills.

Thirdly, the inability to understand the role of energy, and raw materials in general, in enabling human society to evolve has left economists’ training devoid of any real knowledge of the biophysical nature of existence.

This combination of foibles—accepting almost any assumptions, so long as they preserve the vision of capitalism as a self-regulating system; believing that capitalism can in fact survive any threat; and having virtually no knowledge of the physical nature of production—has had the fatal result that Neoclassical economists could not accept that climate change might be a serious threat to capitalism. So when William Nordhaus “proved” that it is not, subject of course to some “simplifying assumptions”, they were happy to accept those assumptions without even considering that they were insanely false assumptions about the nature of the biosphere.


Ackerman, Frank, and Charles Munitz. 2012. ‘Climate damages in the FUND model: A disaggregated analysis’, Ecological Economics, 77: 219-24.

Ackerman, Frank, and Elizabeth A. Stanton. 2008. ‘A comment on “Economy-wide estimates of the implications of climate change: Human health”’, Ecological Economics, 66: 8-13.

Ackerman, Frank, Elizabeth A. Stanton, and Ramón Bueno. 2010. ‘Fat tails, exponents, extreme uncertainty: Simulating catastrophe in DICE’, Ecological Economics, 69: 1657-65.

Andrews, P. W. S. 1941. ‘A survey of industrial development in Great Britain planned since the commencement of the war’, Oxford Economic Papers, 5: 55-71.

———. 1949. ‘A reconsideration of the theory of the individual business: costs in the individual business; the determination of prices’, Oxford Economic Papers: 54-89.

———. 1950. ‘Some aspects of competition in retail trade’, Oxford Economic Papers, 2: 137-75.

Andrews, P. W. S., and Elizabeth Brunner. 1950. ‘Productivity and the business man’, Oxford Economic Papers, 2: 197-225.

Archibald, G. C., Herbert A. Simon, and Paul A. Samuelson. 1963. ‘Problems of Methodology Discussion’, The American Economic Review, 53: 227-36.

Axtell, Robert L. 2001. ‘Zipf Distribution of U.S. Firm Sizes’, Science (American Association for the Advancement of Science), 293: 1818-20.

———. 2006. ‘Firm Sizes: Facts, Formulae, Fables and Fantasies’, Center on Social and Economic Dynamics.

Baumol, W. J., and Alan Blinder. 2011. Economics: Principles and Policy.

Blinder, Alan S. 1998. Asking about prices: a new approach to understanding price stickiness (Russell Sage Foundation: New York).

Bornmann, Lutz, Alexander Butz, and Klaus Wohlrabe. 2018. ‘What are the top five journals in economics? A new meta-ranking’, Applied Economics, 50: 659-75.

Daly, Herman E. 1974. ‘Steady-State Economics versus Growthmania: A Critique of the Orthodox Conceptions of Growth, Wants, Scarcity, and Efficiency’, Policy Sciences, 5: 149-67.

Darwin, Roy. 1999. ‘The Impact of Global Warming on Agriculture: A Ricardian Analysis: Comment’, The American Economic Review, 89: 1049-52.

DeCanio, Stephen J. 2003. Economic models of climate change : a critique (Palgrave Macmillan: New York).

Eiteman, Wilford J. 1945. ‘The Equilibrium of the Firm in Multi-Process Industries’, The Quarterly Journal of Economics, 59: 280-86.

———. 1947. ‘Factors Determining the Location of the Least Cost Point’, The American Economic Review, 37: 910-18.

———. 1948. ‘The Least Cost Point, Capacity, and Marginal Analysis: A Rejoinder’, The American Economic Review, 38: 899-904.

———. 1953. ‘The Shape of the Average Cost Curve: Rejoinder’, The American Economic Review, 43: 628-30.

Eiteman, Wilford J., and Glenn E. Guthrie. 1952. ‘The Shape of the Average Cost Curve’, The American Economic Review, 42: 832-38.

Fama, Eugene F., and Kenneth R. French. 2004. ‘The Capital Asset Pricing Model: Theory and Evidence’, The Journal of Economic Perspectives, 18: 25-46.

Friedman, Milton. 1953. ‘The Methodology of Positive Economics.’ in, Essays in positive economics (University of Chicago Press: Chicago).

Garver, Frederick B., Gustav Seidler, L. G. Reynolds, Francis M. Boddy, and Rufus S. Tucker. 1938. ‘Corporate Price Policies’, The American Economic Review, 28: 86-89.

Gorman, W. M. 1953. ‘Community Preference Fields’, Econometrica, 21: 63-80.

Hall, R. L., and C. J. Hitch. 1939. ‘Price Theory and Business Behaviour’, Oxford Economic Papers: 12-45.

Hausman, Daniel M. 2007. The Philosophy of Economics: An Anthology (Cambridge University Press: Cambridge).

Kaufmann, Robert K. 1997. ‘Assessing The Dice Model: Uncertainty Associated With The Emission And Retention Of Greenhouse Gases’, Climatic Change, 35: 435-48.

———. 1998. ‘The impact of climate change on US agriculture: a response to Mendelsohn et al. (1994)’, Ecological Economics, 26: 113-19.

Keen, Steve. 1993a. ‘The Misinterpretation of Marx’s Theory of Value’, Journal of the history of economic thought, 15: 282-300.

———. 1993b. ‘Use-Value, Exchange Value, and the Demise of Marx’s Labor Theory of Value’, Journal of the history of economic thought, 15: 107-21.

Keen, Steve, Robert U. Ayres, and Russell Standish. 2019. ‘A Note on the Role of Energy in Production’, Ecological Economics, 157: 40-46.

Kuhn, Thomas. 1970. The Structure of Scientific Revolutions (University of Chicago Press: Chicago).

Lester, Richard A. 1946. ‘Shortcomings of Marginal Analysis for Wage-Employment Problems’, The American Economic Review, 36: 63-82.

———. 1947. ‘Marginalism, Minimum Wages, and Labor Markets’, The American Economic Review, 37: 135-48.

Mas-Colell, Andreu, Michael Dennis Whinston, and Jerry R. Green. 1995. Microeconomic theory (Oxford University Press: New York).

Meadows, Donella H., Jorgen Randers, and Dennis Meadows. 1972. The limits to growth (Signet: New York).

Means, Gardiner C. 1936. ‘Notes on Inflexible Prices’, The American Economic Review, 26: 23-35.

Mixon, Franklin G., and Kamal P. Upadhyaya. 2022. ‘Top to bottom: an expanded ranking of economics journals’, Applied economics letters, 29: 226-37.

Musgrave, Alan. 1981. ‘“Unreal Assumptions” in Economic Theory: The F‐Twist Untwisted’, Kyklos (Basel), 34: 377-87.

Oliver, Henry M. 1947. ‘Marginal Theory and Business Behavior’, The American Economic Review, 37: 375-83.

Pindyck, Robert S. 2013. ‘Climate Change Policy: What Do the Models Tell Us?’, Journal of Economic Literature, 51: 860-72.

———. 2017. ‘The Use and Misuse of Models for Climate Policy’, Review of Environmental Economics and Policy, 11: 100-14.

Planck, Max. 1949. Scientific Autobiography and Other Papers (Philosophical Library; Williams & Norgate: London).

Quiggin, John, and John Horowitz. 1999. ‘The impact of global warming on agriculture: A Ricardian analysis: Comment’, The American Economic Review, 89: 1044-45.

Samuelson, Paul A. 1956. ‘Social Indifference Curves’, The Quarterly Journal of Economics, 70: 1-22.

———. 1966. ‘A Summing Up’, Quarterly Journal of Economics, 80: 568-83.

Schlenker, Wolfram, W. Michael Hanemann, and Anthony C. Fisher. 2005. ‘Will U.S. Agriculture Really Benefit from Global Warming? Accounting for Irrigation in the Hedonic Approach’, The American Economic Review, 95: 395-406.

Schumacher, E. F. 1973. Small is beautiful: a study of economics as if people mattered (Blond and Briggs: London).

———. 1979. ‘On Population and Energy Use’, Population and Development Review, 5: 535-41.

Sharpe, William F. 1964. ‘Capital Asset Prices: A Theory of Market Equilibrium under Conditions of Risk’, The Journal of Finance, 19: 425-42.

Stanton, Elizabeth A., Frank Ackerman, and Sivan Kartha. 2009. ‘Inside the integrated assessment models: Four issues in climate economics’, Climate and Development, 1: 166-84.

Tol, Richard S. J. 2009. ‘The Economic Effects of Climate Change’, The Journal of Economic Perspectives, 23: 29–51.

Tucker, Rufus S. 1937. ‘Is There a Tendency for Profits to Equalize?’, The American Economic Review, 27: 519-24.

———. 1938. ‘The Reasons for Price Rigidity’, The American Economic Review, 28: 41-54.

———. 1940. ‘The Degree of Monopoly’, The Quarterly Journal of Economics, 55: 167-69.

Weitzman, Martin L. 2011a. ‘Fat-Tailed Uncertainty in the Economics of Catastrophic Climate Change’, Review of Environmental Economics and Policy, 5: 275-92.

———. 2011b. ‘Revisiting Fat-Tailed Uncertainty in the Economics of Climate Change’, REEP Symposium on Fat Tails, 5: 275–92.

Wong, Stanley. 1973. ‘The “F-Twist” and the Methodology of Paul Samuelson’, The American Economic Review, 63: 312-25.