The Macrofoundations of Macroeconomics

I share Blanchard’s vision that “The pursuit of a widely accepted analytical macroeconomic core, in which to locate discussions and extensions, may be a pipe dream, but it is a dream surely worth pursuing”. But he—and Neoclassical economics in general—errs in the false belief that “Starting from explicit microfoundations is clearly essential; where else to start from?” (Blanchard 2016, p. 3). The answer to Blanchard’s purportedly rhetorical question is that the proper foundation of macroeconomics is not microeconomic theory, but macroeconomics itself.


This may sound paradoxical: how can your foundations be what you are trying to build on those foundations? But in fact, macroeconomic definitions which all economists must accept—simply because they are both true by definition, and essential to the study of the macroeconomy—can easily be turned into dynamic statements that enable the development of a realistic macroeconomic dynamics.

This process of building macroeconomics from macroeconomic definitions yields simple models which fit the data with far less use of arbitrary parameters than Neoclassicals impose on their “microfounded” models—and no use at all of carefully calibrated “exogenous shocks”—and which can be easily extended and made more realistic by adding further definitions (Keen 2020).

Since this chapter—and the models in it—is necessarily complex, I’ll start with its key takeaways. Working directly from incontestably true macroeconomic definitions, it is obvious that:

  • Capitalism is an inherently cyclical system (rather than an equilibrium system);
  • It is liable to collapse into a debt-deflation; but
  • It can be stabilized by counter-cyclical government spending.

These conclusions are the opposite of the a priori biases of Neoclassical microeconomics. And, these results are derived from definitions that all economists must accept, which are turned into dynamic models using empirically realistic simplifying assumptions. The contrary Neoclassical beliefs that capitalism tends towards equilibrium, that debt-deflations are impossible given the (as usual, false) assumptions of the Loanable Funds model of banking, and that government intervention almost always makes the social welfare outcome worse, are based on foundations that are rotten, both intellectually and empirically.

  1. Inherent Complexity and Cyclicality

Though, as noted in Chapter 3, three dimensions are needed to generate a fully complex system, two fundamental definitions are sufficient to demonstrate how different this approach is to Neoclassical modelling—and how realistic it is as well. These two definitions are the employment rate, and wages share of GDP: the former characterises the level of economic activity, and the latter the distribution of income.

The employment rate is how many people are employed, divided by the population; the wages share of GDP is the total wage bill, divided by GDP. Using λ for the employment rate, L for employment, N for population (because I’ll later use P for Prices), ω for the wages share of GDP, W for total wages, and Y for GDP, the starting definitions for a genuinely well-founded macroeconomics are:

$$\lambda = \frac{L}{N}$$

and

$$\omega = \frac{W}{Y}$$

Applying the rules discussed in the previous chapter to the definition of the employment rate yields:

$$\hat{\lambda} = \hat{L} - \hat{N}$$

The same operation applied to the definition of the wages share yields:

$$\hat{\omega} = \hat{W} - \hat{Y}$$

These two equations make two extremely simple, and obviously true, dynamic statements:

  • The employment rate will rise if employment rises faster than population; and
  • The wages share of GDP will rise if total wages rise faster than GDP.

Deriving a dynamic model from these true-by-definition statements is a straightforward task that I have put in a later section, so that I can focus here on the essential point that a realistic and inherently cyclical macroeconomic model can be derived directly from macroeconomic definitions which are beyond dispute.

The model derived from these two definitions is shown below:

$$\hat{\lambda} = \frac{1-\omega}{K_{Yr}} - \delta - \alpha - \beta, \qquad \hat{\omega} = w(\lambda) - \alpha$$

where α is the growth rate of labour productivity, β the population growth rate, δ the depreciation rate, and w(λ) the wage change function.

Developing this model required the introduction of several parameters, and their names were chosen so as to make interpreting their meaning relatively easy: KYr is the Capital to Output ratio, for example. The names, meanings and values of all the parameters in this model are given in Table 3.

Table 3: Parameters in the models

 

There are many programs that can simulate this model, from mathematical systems like Mathematica and Maple, to system dynamics programs like Vensim … and Minsky. I use Minsky because (a) I invented it; (b) it’s free; and (c) it’s the only program designed to model the dynamics of money, which becomes critically important in subsequent chapters.
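For readers who want to experiment outside Minsky, here is a minimal Python sketch of a Goodwin-style version of the system above, solved with SciPy's ODE routines. The parameter values and the linear wage-change function are illustrative assumptions of mine, not the calibrated values in Table 3, so the numbers it prints only illustrate the qualitative behaviour.

```python
# A minimal sketch of a Goodwin-style growth cycle.
# Parameter values and the linear wage-change function are illustrative
# assumptions, not the calibrated values from Table 3.
from scipy.integrate import solve_ivp

K_Yr = 3.0     # capital to output ratio
alpha = 0.02   # labour productivity growth rate (assumed)
beta = 0.01    # population growth rate (assumed)
delta = 0.05   # depreciation rate (assumed)

def wage_fn(emp):
    """Linear wage-change (Phillips-style) function: wages grow faster
    when employment is high (slope and intercept are assumptions)."""
    return 4.0 * (emp - 0.96)

def goodwin(t, state):
    emp, wage_share = state  # employment rate, wages share of GDP
    # With all profits invested, real growth = (1 - wage_share)/K_Yr - delta
    growth = (1.0 - wage_share) / K_Yr - delta
    d_emp = emp * (growth - alpha - beta)                # employment-rate dynamics
    d_wage_share = wage_share * (wage_fn(emp) - alpha)   # wages-share dynamics
    return [d_emp, d_wage_share]

sol = solve_ivp(goodwin, (0.0, 100.0), [0.95, 0.80], max_step=0.05)
print(f"employment rate range: {sol.y[0].min():.3f} to {sol.y[0].max():.3f}")
print(f"wages share range    : {sol.y[1].min():.3f} to {sol.y[1].max():.3f}")
```

The trajectory orbits the model's equilibrium rather than converging to it, which is the endless cyclical behaviour described in the text.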

The model is inherently cyclical, as Figure 13 illustrates.

Figure 13: Inherent and endemic cycles in a definitions-based dynamic macroeconomic model

Capitalism is therefore at its core a cyclical, rather than an equilibrium, system. The Neoclassical portrayal of capitalism as a system that always returns to equilibrium after a disturbance is both a relic of the 19th-century belief that equilibrium was an unfortunate but necessary assumption needed to enable modelling (Jevons 1888, p. 93)—a belief which ceased to be valid in the mid-20th century—and a characteristic that Neoclassicals artificially impose on their RBC and DSGE models, because they have elevated equilibrium from a modelling compromise into a critical component of their vision of capitalism as a welfare-maximizing system—which it isn’t.

Given that this model can be derived directly from macroeconomic definitions, with none of the arbitrary assumptions that characterized Ramsey’s derivation of his growth model—let alone the crazy assumptions added by later Neoclassicals to apply Ramsey’s model to the macroeconomy (Solow 2010, p. 13)—this model should be regarded as half of the foundational model of macroeconomics—half because it does not yet include the financial sector or the government, which I add in the next two sections.

This model is, in fact, Richard Goodwin’s “growth cycle” model, which he developed in 1965 (Goodwin 1966; Goodwin 1967). My sole contribution here is to show that, rather than being based on “ad-hoc” equations, Goodwin’s model can be directly and easily derived from strictly true macroeconomic definitions, and straightforward simplifying assumptions.

Goodwin’s model has been neglected in economics, largely because Neoclassical economists abhor non-equilibrium systems, but also because of an unfortunate paper—for which I must confess that I was one of the referees who recommended its publication—which incorrectly derided its empirical accuracy. Entitled “Testing Goodwin: Growth Cycles in Ten OECD Countries”, it concluded that “At a quantitative level, Goodwin’s … estimated parameter values poorly predict the cycles’ centres” (Harvie 2000, p. 359).

In fact, this conclusion was due to a mistake by Harvie that he later frankly described to me as a “typical schoolboy error”: he used numbers in percentages, when his work had been done in fractions. That put his numbers out by a factor of 100—a fact that I only discovered when I attempted to use his parameters in a model. This mistake was corrected by Grasselli and Maheshwari (2017), who found that the properly calibrated model was consistent with the data for OECD countries.

Figure 14 illustrates this with respect to US data from 1948 till 1968: using historically reasonable parameter values, the equilibrium of the model in Figure 13 precisely reproduces the average value for the employment rate and wages share between 1948 and 1968, even though the model is still very incomplete. The model also reproduces the cyclicality of the empirical data—something that a Neoclassical model cannot do without adding (carefully calibrated!) “exogenous shocks”—though not the actual magnitude of those cycles.

 

Figure 14: USA Employment and Wages Share Dynamics from 1948 till 1968

The next section completes this as a model of a pure capitalist economy by introducing the financial sector, in the form of the private debt to GDP ratio (in a later chapter, I explain why private debt is an essential component of the foundational model of macroeconomics).

  2. Debt-Deflation in a Pure Credit Economy

The debt ratio dr is the level of private debt (DP) divided by GDP:

$$d_r = \frac{D_P}{Y}$$

In dynamic form, this definition is:

$$\hat{d_r} = \hat{D_P} - \hat{Y}$$

As usual, this equation has a straightforward verbal interpretation: the private debt ratio will rise if private debt grows faster than GDP.

Several modifications are required to variables in the previous model to integrate private debt dynamics into it. Debt means interest payments, so the rate of interest r was added as a parameter (it can be a variable in more elaborate models); profit is now net of interest payments as well as of wages; and Goodwin’s extreme assumption that capitalists invest all their profits is replaced by an investment function iG (based on the rate of profit πr), which has the same form as the wage change function in the previous model, and which assumes—rather too generously—that all debt is used to finance productive investment. As explained in Section 7.6, this results in the following 3-equation system:

$$\hat{\lambda} = g_r - \alpha - \beta, \qquad \hat{\omega} = w(\lambda) - \alpha, \qquad \frac{d}{dt}d_r = i_G(\pi_r) - \pi_s - d_r\,g_r$$

$$\text{where}\quad \pi_s = 1 - \omega - r\,d_r, \qquad \pi_r = \frac{\pi_s}{K_{Yr}}, \qquad g_r = \frac{i_G(\pi_r)}{K_{Yr}} - \delta$$

Here gr stands for the growth rate of real output; w(λ) stands for the wage change function; iG stands for gross investment (investment before depreciation) and is a function of the rate of profit πr; and the profit share πs is introduced, since it plays a significant—and surprising—role in the dynamics of the model.
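As a check on the debt equation, it may help to spell out the step from the definition, using the assumption stated above that borrowing finances the gap between gross investment and profits (the notation is my transcription of the model, so the book's symbols may differ slightly):

$$\dot{D}_P = \left(i_G(\pi_r) - \pi_s\right) Y \quad\Longrightarrow\quad \frac{d}{dt}\left(\frac{D_P}{Y}\right) = \frac{\dot{D}_P}{Y} - \frac{D_P}{Y}\,\frac{\dot{Y}}{Y} = i_G(\pi_r) - \pi_s - d_r\,g_r$$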

The parameters for the investment function, and the interest rate, are shown in Table 4.

Table 4: Parameters added to the Goodwin model to include private debt

With this model, “we’re not in Kansas anymore” when compared to the Neoclassical view of reality. The model can reach equilibrium, but most likely it will not: it will appear to be heading for equilibrium, only to cycle away from it (Pomeau and Manneville 1980). The people who don’t borrow in this simple model—workers—are the ones who pay the cost of borrowing, via a lower share of national income. The people who do borrow—capitalists—don’t pay the cost, but they are also the last ones to realise that, when the system is unstable, it is headed for a breakdown that will bankrupt them. Finally, the direct beneficiaries of rising private debt—bankers—end up owning everything of nothing.

The technical reason why this model is so much more complex than Goodwin’s is that a model needs three dimensions in order to display complex behaviour—”Period Three Implies Chaos”, as Li and Yorke put it (Li and Yorke 1975). Goodwin’s model, with just two dimensions (the employment rate λ and the wages share ω), is constrained by the nature of differential equations to display a very limited range of dynamic behaviours. But when the third dimension of the private debt ratio dr is added, much more complex and realistic behaviours can be generated.

Figure 15 shows a run of the model that does converge to equilibrium (though very slowly: it takes a millennium for the cycles to become imperceptible).

Figure 15: The model with investment and capital to output parameters that lead to equilibrium

With the parameter values used in Figure 15, the new system-state in this model—the private debt to GDP ratio—has an equilibrium comparable to the level of the 1950s. But as Figure 16 illustrates, this was no equilibrium: the ratio rose substantially, and almost continuously, until hitting a peak of 170% of GDP in 2008.

Figure 16: USA Private Debt level since WWII (https://www.bis.org/statistics/full_data_sets.htm)

If we choose parameter values for the model that generate this peak level of private debt as an equilibrium—by changing the slope of the investment function from 5 to 5.86—then we get an entirely different class of dynamics from this model. What Costa-Lima and Grasselli characterized as the “good equilibrium” of this model (Costa Lima, Grasselli, Wang, and Wu 2014, p. 35) becomes an unstable “strange attractor”, while the “bad equilibrium”—of a zero level of employment, a zero wages share, and an infinite debt ratio—becomes a stable attractor. If run for long enough, the model eventually collapses into zero wages share, zero employment, and an infinite debt ratio.
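As with the Goodwin model, the three-dimensional system can be explored outside Minsky. The sketch below is my own rough Python transcription with a linear investment function; all parameter values, and the forms of the wage and investment functions, are assumptions for illustration rather than the calibration in Table 4, so a given run may converge, cycle or blow up at parameter values different from those quoted in the text.

```python
# A rough sketch of the three-dimensional model: employment rate, wages share
# and private debt ratio. All parameter values and functional forms are
# illustrative assumptions, not the calibration behind Figures 15 and 18.
from scipy.integrate import solve_ivp

K_Yr, alpha, beta, delta, r = 3.0, 0.02, 0.01, 0.05, 0.04  # assumed values

def wage_fn(emp):
    return 4.0 * (emp - 0.96)                     # assumed linear wage-change function

def invest_fn(profit_rate, slope):
    return slope * (profit_rate - 0.03) + 0.09    # assumed linear investment function

def minsky(t, state, slope):
    emp, wage_share, debt_ratio = state
    profit_share = 1.0 - wage_share - r * debt_ratio   # profit net of wages and interest
    profit_rate = profit_share / K_Yr
    i_g = invest_fn(profit_rate, slope)                 # gross investment share of GDP
    growth = i_g / K_Yr - delta                         # real growth rate of output
    return [emp * (growth - alpha - beta),
            wage_share * (wage_fn(emp) - alpha),
            i_g - profit_share - debt_ratio * growth]   # debt finances investment minus profits

def blowup(t, state, slope):
    return 50.0 - state[2]        # stop a run if the debt ratio exceeds 50 times GDP
blowup.terminal = True

for slope in (5.0, 5.86):         # the two investment-function slopes contrasted in the text
    sol = solve_ivp(minsky, (0.0, 300.0), [0.95, 0.80, 0.5],
                    args=(slope,), events=blowup, max_step=0.05)
    print(f"slope {slope}: final debt ratio {sol.y[2, -1]:.2f}, "
          f"final employment rate {sol.y[0, -1]:.3f}")
```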

One emergent property of this model is that, with parameter values that lead to a private debt crisis, the volatility of the model declines prior to the crisis, in a manner which was replicated by the real world in the “Great Moderation” that preceded the “Great Recession” of 2007. This phenomenon is more obvious with nonlinear behavioural functions, which are applied in the model shown in Figure 18. The nonlinear functions shown in Figure 17 are generalized exponentials. These give a consistent curvature compared to a linear function, and rule out anomalies like negative investment. I’ve used linear functions for workers’ wage demands and capitalist investment decisions thus far, not because they’re more realistic—far from it—but because their use confirms that the cyclical behaviour of the models is driven, not by assumptions imposed by the modeller, but by the structure of the economy itself.
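For concreteness, one family of functions with the properties just described (a curve that passes through a chosen point with a chosen slope, keeps a consistent curvature, and flattens out towards a floor, so that gross investment, for instance, can never become negative) can be written as follows. The symbols are simply the anchor point (xc, yc), the slope s at that point, and the floor m; the exact functional form used in the book's figures may differ:

$$f(x) = (y_c - m)\,\exp\!\left(\frac{s\,(x - x_c)}{y_c - m}\right) + m, \qquad f(x_c) = y_c, \quad f'(x_c) = s, \quad \lim_{x \to -\infty} f(x) = m$$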

Figure 17: Nonlinear versus linear functions for wage change, investment and government spending change

With these functions and parameter values, the pure credit model undergoes a debt-induced crisis—see Figure 18.

 

Figure 18: A Private-Debt-induced Crisis with a nonlinear investment function

This still very simple and stylized model has an important real-world implication, which I noted in the conclusion to my first paper on this topic, “Finance and Economic Breakdown: Modelling Minsky’s Financial Instability Hypothesis” (Keen 1995): a period of tranquillity in a capitalist economy is not inherently a good thing, but can in fact be a warning that a crisis is approaching:

From the perspective of economic theory and policy, this vision of a capitalist economy with finance requires us to go beyond that habit of mind that Keynes described so well, the excessive reliance on the (stable) recent past as a guide to the future. The chaotic dynamics explored in this paper should warn us against accepting a period of relative tranquility in a capitalist economy as anything other than a lull before the storm. (Keen 1995, p. 634. Emphasis added)

In contrast, equilibrium-obsessed Neoclassical economists saw the “Great Moderation” as a “welcome change to the economy” (Bernanke 2004), and actually attributed it to their successful management of the economy:

The sources of the Great Moderation remain somewhat controversial, but as I have argued elsewhere, there is evidence for the view that improved control of inflation has contributed in important measure to this welcome change in the economy. (Bernanke 2004)

The other emergent property of the model is equally striking: though firms (capitalists) are the ones doing the borrowing in this model, it is the workers who pay for rising debt via a declining workers’ share of income. Profits cycle around the equilibrium level until the crisis, while workers’ incomes decline as a direct effect of the rising income share going to banks.

This model is, of course, a mathematical rendition of Hyman Minsky’s “Financial Instability Hypothesis” (Minsky 1975, 1982). Though other specialists on Minsky emphasise his classification of finance into Hedge, Speculative and Ponzi Finance, and the change in the relative proportions of these financial archetypes through the business cycle, my favourite expression of the FIH as a dynamic process is the following from “The Financial Instability Hypothesis: An Interpretation of Keynes and an Alternative to ‘Standard’ Theory”:

The natural starting place for analyzing the relation between debt and income is to take an economy with a cyclical past that is now doing well. The inherited debt reflects the history of the economy, which includes a period in the not-too-distant past in which the economy did not do well. Acceptable liability structures are based upon some margin of safety so that expected cash flows, even in periods when the economy is not doing well, will cover contractual debt payments.

As the period over which the economy does well lengthens, two things become evident in board rooms. Existing debts are easily validated and units that were heavily in debt prospered; it paid to lever. After the event it becomes apparent that the margins of safety built into debt structures were too great. As a result, over a period in which the economy does well, views about acceptable debt structure change. In the deal-making that goes on between banks, investment bankers, and businessmen, the acceptable amount of debt to use in financing various types of activity and positions increases. This increase in the weight of debt financing raises the market price of capital assets and increases investment. As this continues the economy is transformed into a boom economy…

It follows that the fundamental instability of a capitalist economy is upward. The tendency to transform doing well into a speculative investment boom is the basic instability in a capitalist economy. (Minsky 1977, pp. 12-13; 1982, pp. 66-67. Emphasis added)

Minsky’s genius was his capacity to see this fundamental instability of capitalism, free of the hobbling Neoclassical assumption of equilibrium, and to relate this to the level of private debt—an insight he garnered from Fisher (Fisher 1933) rather than from Keynes, whom he didn’t properly appreciate until he read the brief essay “The General Theory of Employment” (Keynes 1937) in 1968 (Minsky 1969a, p. 9, footnote 6; 1969b, p. 225).

We now have three-quarters of a foundational dynamic model of capitalism. The final element needed to complete this model (prior to the introduction of prices) is a government sector.

  3. Stabilising an Unstable Economy

A government sector is added by applying Kalecki’s insight that net government spending adds to profits. Kalecki began with Table 5, which shows GDP in terms of both income (the left-hand column) and expenditure:

Table 5: Kalecki’s basic Income and Expenditure Identity (Kalecki 1954, p. 45)

Income                    Expenditure
Gross profits             Gross investment
Wages and salaries        Capitalists’ consumption
                          Workers’ consumption
Gross national product    Gross national product

Assuming that workers spend all of their incomes, Kalecki equated Gross Profits to the sum of Gross Investment and Capitalists’ Consumption, and then asked the causal question: which comes first? His answer was that:

it is clear that capitalists may decide to consume and to invest more in a given period than in the preceding one, but they cannot decide to earn more. It is, therefore, their investment and consumption decisions which determine profits, and not vice versa. (Kalecki 1954, p. 46. Emphasis added)

Kalecki then included government expenditure and taxes and exports, and allowed for some saving by workers. Simplifying his final equation somewhat, this led to the relationship that:

$$\text{Profits} = \text{Investment} + \text{Capitalists' Consumption} + \left(\text{Government Spending} - \text{Taxes}\right)$$

The model in this section follows Kalecki (Kalecki 1954, p. 49) by redefining profit to include net government spending (spending in excess of taxation), adding a behavioural function that makes the rate of change of net government spending depend on the level of unemployment, and adding a differential equation for net government spending itself.

        
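As a minimal sketch of what such a system can look like (the notation and the response function here are my own illustrative choices, not necessarily those used in the book), let Gn be real net government spending, gs = Gn/Y its share of GDP, and let the growth rate of Gn respond to unemployment through some increasing function sg. The profit share then includes the net spending share, and the spending share rises whenever net spending grows faster than GDP:

$$\pi_s = 1 - \omega - r\,d_r + g_s, \qquad \frac{d}{dt}g_s = g_s\left(s_g(1-\lambda) - g_r\right)$$

With this structure, net government spending grows faster than GDP when unemployment is high, which raises the profit share and stimulates investment, and grows more slowly than GDP when unemployment is low.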

Figure 19 simulates this system with nonlinear behavioural functions and the same system parameters as the unstable private-sector-only model in Figure 17, to illustrate the result that, as Hyman Minsky argued, “big government virtually ensures that a great depression cannot happen again” (Minsky 1982, p. xxxii). However, the process is more complex—fittingly—than Minsky could envisage with purely verbal reasoning.

The definitive treatment of the dynamic and stability properties of this model is given by Costa-Lima, Grasselli et al. in “Destabilizing a stable crisis: Employment persistence and government intervention in macroeconomics” (Costa Lima, Grasselli, Wang, and Wu 2014), though their model was more complicated than the one shown here.

The obvious outcome—that the model remains cyclical, but does not undergo a breakdown—is due to a characteristic of complex dynamic systems known as persistence. Given how common this phenomenon is in real-world systems, and how unfamiliar economists are with the concept, it is worth quoting their paper at length:

Persistence theory studies the long-term behaviour of dynamical systems, in particular the possibility that one or more variables remain bounded away from zero. Typical questions are, for example, which species in a model of interacting species will survive over the long-term, or whether it is the case that in an endemic model an infection cannot persist in a population due to the depletion of the susceptible population.

In our context, we are interested in establishing conditions in economic models that prevent one or more key economic variables, such as the employment rate, from vanishing… we prove … that under a variety of alternative mild conditions on government subsidies, the model describing the economy is uniformly weakly persistent with respect to the employment rate

[W]e can guarantee that the employment rate does not remain indefinitely trapped at arbitrarily small values. This is in sharp contrast with what happens in the model without government intervention, where the employment rate is guaranteed to converge to zero and remain there forever if the initial conditions are in the basin of attraction of the bad equilibrium corresponding to infinite debt levels… no matter how disastrous the initial conditions are, a sufficiently responsive government can bring the economy back from a state of crises associated with zero employment rates…

On the other hand … austerity implies that the government cannot prevent the economy from remaining trapped in the basin of attraction of at least one of the bad equilibria, which is of course an undesirable outcome. (Costa Lima, Grasselli, Wang, and Wu 2014, pp. 31, 37. Emphasis added)

 

Figure 19: Net government spending stabilizes the previously unstable system

  4. Conclusion

Though these models are still very simple, they generate a picture of the economy that is at once totally different to the Neoclassical fantasy of eternal (but sometimes exogenously shocked) equilibrium, and empirically much easier to fit to actual data. In the next chapter I show that these models can also be developed by following the causal approach of system dynamics, which also makes it easier to add further real-world complexities to the basic models. Blanchard’s dream of “a widely accepted analytical macroeconomic core, in which to locate discussions and extensions” is alive and well, but only if the dead-end of Neoclassical equilibrium modelling is abandoned.

 

 

Bellino, Enrico. 2013. ‘On the stability of the Ramsey accumulation path (MPRA Paper No. 44024).’ in Levrero S., Palumbo A. and Stirati A. (eds.), Sraffa and the Reconstruction of Economic Theory (Palgrave Macmillan: Houndmills, Basingstoke, Hampshire, UK).

Bernanke, Ben S. 2004. “Panel discussion: What Have We Learned Since October 1979?” In Conference on Reflections on Monetary Policy 25 Years after October 1979. St. Louis, Missouri: Federal Reserve Bank of St. Louis.

Blanchard, Olivier. 2016. ‘Do DSGE Models Have a Future? ‘, Peterson Institute for International Economics. https://www.piie.com/publications/policy-briefs/do-dsge-models-have-future.

Blatt, John M. 1983. Dynamic economic systems: a post-Keynesian approach (Routledge: New York).

Costa Lima, B., M. R. Grasselli, X. S. Wang, and J. Wu. 2014. ‘Destabilizing a stable crisis: Employment persistence and government intervention in macroeconomics’, Structural Change and Economic Dynamics, 30: 30-51.

Fisher, Irving. 1933. ‘The Debt-Deflation Theory of Great Depressions’, Econometrica, 1: 337-57.

Goodwin, R. M. 1966. ‘Cycles and Growth: A growth cycle’, Econometrica, 34: 46.

Goodwin, Richard M. 1967. ‘A growth cycle.’ in C. H. Feinstein (ed.), Socialism, Capitalism and Economic Growth (Cambridge University Press: Cambridge).

Grasselli, Matheus R., and Aditya Maheshwari. 2017. ‘A comment on ‘Testing Goodwin: growth cycles in ten OECD countries”, Cambridge Journal of Economics, 41: 1761-66.

Harvie, David. 2000. ‘Testing Goodwin: Growth Cycles in Ten OECD Countries’, Cambridge Journal of Economics, 24: 349-76.

Jevons, William Stanley. 1888. The Theory of Political Economy ( Library of Economics and Liberty: Internet).

Kalecki, M. 1954. Theory of Economic Dynamics: An Essay on Cyclical and Long-Run Changes in Capitalist Economy (MacMillan: London).

Keen, Steve. 1995. ‘Finance and Economic Breakdown: Modeling Minsky’s ‘Financial Instability Hypothesis.”, Journal of Post Keynesian Economics, 17: 607-35.

———. 2020. ‘Emergent Macroeconomics: Deriving Minsky’s Financial Instability Hypothesis Directly from Macroeconomic Definitions’, Review of Political Economy, 32: 342-70.

Keynes, J. M. 1937. ‘The General Theory of Employment’, The Quarterly Journal of Economics, 51: 209-23.

Li, Tien-Yien, and James A. Yorke. 1975. ‘Period Three Implies Chaos’, The American Mathematical Monthly, 82: 985-92.

Minsky, Hyman P. 1969a. ‘The New Uses of Monetary Powers’, Nebraska Journal of Economics and Business, 8: 3-15.

———. 1969b. ‘Private Sector Asset Management and the Effectiveness of Monetary Policy: Theory and Practice’, Journal of Finance, 24: 223-38.

———. 1975. John Maynard Keynes (Columbia University Press: New York).

———. 1977. ‘The Financial Instability Hypothesis: An Interpretation of Keynes and an Alternative to ‘Standard’ Theory’, Nebraska Journal of Economics and Business, 16: 5-16.

———. 1982. Can “it” happen again? : essays on instability and finance (M.E. Sharpe: Armonk, N.Y.).

Pomeau, Yves, and Paul Manneville. 1980. ‘Intermittent transition to turbulence in dissipative dynamical systems’, Communications in Mathematical Physics, 74: 189-97.

Ramsey, F. P. 1928. ‘A Mathematical Theory of Saving’, The Economic Journal, 38: 543-59.

Solow, R. M. 2010. “Building a Science of Economics for the Real World.” In House Committee on Science and Technology Subcommittee on Investigations and Oversight. Washington.

 

I’m not Discreet, and Neither is Time

That appalling pun highlights a problem with economic modellers in general: the treatment of time as a discrete rather than a continuous phenomenon. Though there are (not enough!) exceptions, the habit amongst both Neoclassical and Post-Keynesian economists is to model an economic variable now as depending on other variables one year in the past (and for Neoclassicals, the future).


 

For example, the Smets and Wouters Neoclassical DSGE model discussed in Chapter 1 has the following equation to represent consumption (Smets and Wouters 2007, Equation 2, p. 588). The details of the equation are unimportant. What I want you to notice is how time is modelled, with ct representing consumption this year, ct-1 consumption the previous year, ct+1 expectations of consumption next year, and so on:

$$c_t = c_1\,c_{t-1} + (1 - c_1)\,E_t c_{t+1} + c_2\,(l_t - E_t l_{t+1}) - c_3\,(r_t - E_t \pi_{t+1} + \varepsilon^b_t)$$

Similarly, the Post-Keynesian economists Lavoie and Zezza have an equation for consumption which also has consumption this year depending on variables from the year before (Lavoie and Zezza 2020, Equation 25, p. 465):

    

This is simply the wrong way to model time in an economic model, but it is the norm amongst economists of all paradigmatic persuasions. There is an alternative approach—which is the norm in genuine sciences—of modelling time using differential equations, and that’s what I will use in the remainder of this book.

Unfortunately, the practice of using discrete-time equations (otherwise known as “difference equations”: equations in terms of t, t-1, etc.) is so prevalent, and so accepted not just by Neoclassical economists, but by the staff who teach mathematical methods to young economists, that I have to spend some time dissing it first.

  1. Friends Don’t Let Friends Use Difference Equations

Firstly, ask yourself, does your consumption today really depend on your income last year?

Of course not! Unless you are a billionaire, your expenditure today will be influenced by your income in the last week to month, not a year.

But that’s reasoning at the level of a single individual, and macroeconomic modelling, which is the focus of this book, is about aggregates. This brings in another level of distortion. The use of a difference equation for aggregate investment, for example, implies that the investment decisions of all firms occur at the same frequency: that they are coordinated somehow. This is nonsense. Individual firms invest at very different frequencies to each other: a fast-food delivery service might invest on a monthly basis; a semiconductor producer might have a decade-long investment horizon.

The sum of many such asynchronous investment decisions ends up looking like a flow of investment decisions over time—which is the sort of thing that a differential equation portrays much more easily than a difference equation.

Thirdly, why do economic models use a one-year time-delay for everything? Fundamentally, because of laziness: it’s just easy to use a delay of 1 for everything.

There’s also a technical reason for this. Let’s say you took my criticism here seriously, but decided to still use difference equations, and to make your time-step a week rather than a year. Then you could have a difference equation for consumption with a time step of a week, which is reasonable, and one for investment with a time step of 52 weeks (a year), which is also reasonable. But then, to simulate your model you would need to provide 52 “initial conditions” for investment—if you had investment today depending on profits from 52 weeks ago, you would need to supply the model with 52 weekly values for profits (and investment) before it could be simulated.

Worse still, if you found by empirical research that the time delay in investment was actually 78 weeks, you would have to redesign your model to reflect that—and supply the additional 26 initial conditions as well.

This is where the laziness comes from: it’s so hard to do difference equations well that economists do them badly instead, and reduce everything to a one-year time delay.
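The bookkeeping burden is easy to see in code. In the toy sketch below (the 52-week delay, the response coefficient and the variable names are all assumptions for illustration), a difference equation with a 52-week delay has to carry a 52-element history of initial values around, whereas the continuous-time first-order lag introduced later in this chapter carries a single state variable.

```python
# Toy illustration: a 52-week delayed response needs 52 stored initial values.
from collections import deque

DELAY_WEEKS = 52
profit_history = deque([100.0] * DELAY_WEEKS, maxlen=DELAY_WEEKS)  # 52 initial conditions

def investment_this_week(current_profit):
    """Investment responds to profit from 52 weeks ago (assumed rule)."""
    delayed_profit = profit_history[0]       # the value from 52 weeks back
    profit_history.append(current_profit)    # roll the window forward one week
    return 0.2 * delayed_profit              # assumed response coefficient

for week in range(3):
    print(investment_this_week(100.0 + week))
```

Re-estimating the delay at, say, 78 weeks means resizing and re-initialising the whole history, which is exactly the re-design problem described above.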

Fourthly, difference equations have some qualitative features that differ from those of differential equations, which can introduce spurious dynamics into a model. For example, I noted earlier that three dimensions are needed before a dynamic system can display complex behaviour—which used to be called “chaos”. But that’s only true of continuous-time equations. A one-dimensional difference equation can display chaotic behaviour, and one of the most famous such systems is the logistic difference equation:

$$x_{t+1} = r\,x_t\,(1 - x_t)$$

The pattern it generates is indeed fascinating—see Figure 11—and it has some real-world applications, notably in population dynamics for species with high reproduction rates.

Figure 11: Chaotic behaviour in the discrete-time logistic equation

But if you’re trying to model a process in an economic model that has logistic characteristics, then using a discrete-time form, when the process is better described by the continuous form, will introduce spurious dynamics. Figure 12 shows how a continuous time logistic equation behaves. This might be more boring, but it will also be more realistic. A difference-equation model will generate more “interesting” dynamics, but simply because it uses the wrong modelling approach, not because the underlying economic system is truly chaotic (though as you will see, the macroeconomy can be chaotic—or rather complex).

Figure 12: Convergent behaviour in the continuous-time logistic equation
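A quick way to see the contrast is to simulate both forms of the logistic equation. In the sketch below the growth parameter is an illustrative choice, and the continuous version is solved with SciPy's adaptive Runge-Kutta routine, so the solver, not the modeller, chooses the time step.

```python
# Discrete logistic map versus continuous logistic growth.
import numpy as np
from scipy.integrate import solve_ivp

# Discrete map: x_{t+1} = r * x_t * (1 - x_t). For r = 3.9 (an illustrative
# value) the iterates bounce around chaotically instead of settling down.
r, x = 3.9, 0.2
discrete_path = []
for _ in range(100):
    x = r * x * (1.0 - x)
    discrete_path.append(x)

# Continuous form: dx/dt = r * x * (1 - x). For any positive growth rate the
# solution converges smoothly to the carrying capacity x = 1.
sol = solve_ivp(lambda t, y: r * y * (1.0 - y), (0.0, 10.0), [0.2])  # adaptive RK45

print("last five discrete iterates :", np.round(discrete_path[-5:], 3))
print("final value, continuous form:", round(float(sol.y[0, -1]), 3))
```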

Some economists think that the discrete nature of economic data necessitates using discrete time models to reproduce it. The definitive riposte to this argument was given by the “father of system dynamics”, the engineer Jay Forrester, when he first encountered—and was seriously dismayed by—economic modelling in the 1950s. He observed that:

Time intervals of model solutions are often too widely spaced for the predictions being attempted. For example, a model solved annually to arrive at new annual values of economic variables would, if anything, be useful in predicting future trends over a five-year period but not year-to-year variations. As a rough rule-of-thumb, one would want solutions spaced closely enough to define a smooth curve through the fluctuations in which we are interested. (Forrester 2003, p. 334. Emphasis added)

In the concluding part of the paper, entitled “A future approach to model building”, Forrester reiterated that the belief that discrete data necessitated discrete modelling was simply wrong:

The incremental time intervals for which the variables of a model are solved step-by-step in time must be much shorter than often supposed… For models of the national economy as a whole it is unlikely that the time interval can be longer than one month and it is entirely possible that weekly intervals might be necessary.

This solution interval is unrelated to the interval at which national statistics and economic indicators are measured. The model should generate the instantaneous values of the variables which exist in the real system, whether or not these can be measured. The measurability of some of these variables is immaterial to the structure of the model and the incremental time steps through which it is advanced. The frequency of collection of statistics will only determine the frequency with which the model can be compared with reality, and in turn will affect the ease of getting the model structure and coefficients to converge toward their real counterparts.

The incremental time interval used for model solution will, on the other hand, be related to the lengths of the time constants which have been incorporated in the structure of the model. As a rough generalization, it will be necessary to have several solution intervals within the shortest time delay which is recognized in the model. Correspondingly, the model will need several solution intervals in a half-cycle of the highest frequency response which is to be generated at any point in the model. (Forrester 2003, p. 343. Emphasis added)

It might be thought that this would make mathematical modelling more complicated—wouldn’t the need for high frequency modelling force you to specify economic processes in intricate detail? In fact, the reverse is the case, because mathematicians have developed techniques for simulating dynamic systems that take care of the time intervals for you (the most famous being the Runge-Kutta algorithm), and these have adaptive mechanisms to improve simulation accuracy and speed as well. Rather than having to worry about the simulation time interval, you can forget about it, and well-established mathematical routines take care of it for you.

Another frequently made objection to continuous time methods is that economic decisions, such as investment, are based on lagged data, rather than current data, and therefore period analysis is needed to capture these lags. For example, Godley and Lavoie (2007) assume:

that governments react to lagged inflation rates, rather than to actual or expected inflation rates, on the realistic grounds that fiscal policy may have a reaction time somewhat longer than monetary policy. (Godley and Lavoie 2007, p. 92)

Therefore, they use two equations to represent “real pure government expenditures” g, and the “growth rate of real pure government expenditures” grg, where the rate of growth of government expenditure is a function of “the growth rate of potential output” gr, the change in the lagged inflation rate, and the deviation of the lagged inflation rate from the target inflation rate:

         

In fact, lags are easily represented in differential equations, using what is known as a “first-order time lag”, to relate the delayed perception of the rate of inflation, πτ, to the actual, instantaneous rate of inflation, π. I’ll use the subscript τ rather than “-1” for the time-lagged inflation rate, since a time lag can be any length, not merely “one period”. The time-lagged inflation rate is defined by its rate of convergence to the actual inflation rate, which is given by the “time constant” τ (which, in an elaborate model, can be a variable if desired): τ measures the length of time, in years, that it takes for the perceived rate of inflation πτ to converge towards the actual rate of inflation π. If τ=0.5 this is a 6-month lag; if τ=1, a year, and so on. This rate of convergence is given by the following differential equation:

$$\frac{d}{dt}\pi_\tau = \frac{\pi - \pi_\tau}{\tau}$$

Similarly, the growth rate of government expenditure is expressed as a differential equation:

$$\frac{d}{dt}g = gr_g \cdot g$$

The variable growth rate grg can now be defined as something like the following (where β1 and β2 are reaction coefficients), or it could be replaced with its own differential equation:

$$gr_g = gr - \beta_1\,\frac{d}{dt}\pi_\tau - \beta_2\left(\pi_\tau - \pi^T\right)$$

This approach is vastly superior to the discrete approach to time lags (which is more correctly called a time-delay, rather than a time-lag), for many reasons.

Time-lags are flexible. Your lag can be a fraction of a year, or multiple years, or even an irrational number if you wish: it doesn’t have to be 1, 2, 3 “time periods”, as in conventional economic modelling. And of course, I’m being generous in saying that! As already noted, economic models use a time delay of “1 period” for almost everything. In Godley and Lavoie (2007), interest payments have a lag of -1 (equation 1); spending is negatively related to the interest rate with a lag of -1 (equation 2); taxes on wealth are lagged -1 (equation 7). This is typical. Factors which in the real world occur at vastly different frequencies—consumption, for example, has a much higher frequency than investment—are all corralled into the same arbitrary frequency.

Therefore, the time-delays (not time-lags) in discrete time economic models—which is to say, the vast majority of economic models—are spurious. They have nothing to do with the actual characteristics of time-dependent actions in the real economy. Time lags, on the other hand, can be derived from empirical data. They are also easy to edit: a time lag is a simple scalar, and if you find that you’re using the wrong value—say, data shows that the time lag in investment is actually 1.5 years when your model uses 3 years—then all you have to alter is that number. On the other hand, if discrete-time economic models did time delays properly, they would have different delays for consumption (short) versus investment (long). This simply isn’t done. If it were, and empirical data then indicated that the delay was different to what the model used, a wholesale re-writing of the model would be necessary.
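A short sketch makes the “just change one number” point concrete; the step change in inflation and the two time constants used here are purely illustrative.

```python
# A first-order time lag: the perceived inflation rate converges towards the
# actual rate with time constant tau (in years). Changing the lag means
# changing the single scalar tau, nothing else.
from scipy.integrate import solve_ivp

def actual_inflation(t):
    return 0.02 if t < 1.0 else 0.05     # assumed step change after one year

def perceived(t, y, tau):
    return [(actual_inflation(t) - y[0]) / tau]

for tau in (0.5, 1.5):                   # a six-month lag versus an eighteen-month lag
    sol = solve_ivp(perceived, (0.0, 5.0), [0.02], args=(tau,), max_step=0.01)
    print(f"tau = {tau}: perceived inflation two years in = {sol.y[0, sol.t <= 2.0][-1]:.4f}")
```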

The final reason for economists using difference equations is simply habit. It’s what they’ve always done, therefore a century later, it’s what they still do: if there’s a difference equation approach at hand, that’s what they’ll use, even if differential equation methods are superior.

For that reason, I have not enabled difference-equation logic at all in Minsky. Given how inappropriate difference-equation models are for modelling the economy, and yet how much they are used by economists, Minsky deliberately does not support time-delays: “friends don’t let friends use periods”. We may need to introduce time-delays at some point, to enable the importing of models from other system dynamics programs, but if so, they will exist solely for that purpose.

With these issues covered, it’s time to turn from critique to construction.

  2. A Simple Approach to Dynamics Using Ratios

Ordinary Differential Equations (ODEs) are hard, and “Partial Differential Equations” (PDEs) even more so. If you want to become truly fluent in dynamic modelling, then I recommend doing mathematics courses in calculus, differential equations, and linear algebra (which you need to work out the stability properties of dynamic systems).

But it’s also possible to do a lot of dynamic modelling using something with which everyone is familiar: percentages (or rather, ratios). If you say, for example, that “the rate of growth of GDP is 2.3% per year”, you are stating a differential equation: you are saying that the rate of change of GDP equals 0.023 times the current value of GDP. The following are equivalent statements:

$$\frac{dY}{dt} = 0.023 \times Y \qquad\Longleftrightarrow\qquad \frac{1}{Y}\frac{dY}{dt} = 0.023 \qquad\Longleftrightarrow\qquad \frac{d}{dt}\ln Y = 0.023$$

The ratio form is the most useful, since the logic of logarithms lets you convert division and multiplication of ratios into addition and subtraction. For example, if you have a variable x in your model, then mathematicians indicate its rate of growth by putting a “hat” over the variable:

$$\hat{x} \equiv \frac{1}{x}\frac{dx}{dt} = \frac{d}{dt}\ln x$$

If x is the ratio of one variable X to another, Y, then the growth rate of x is the growth rate of X minus the growth rate of Y:

$$x = \frac{X}{Y} \quad\Longrightarrow\quad \hat{x} = \hat{X} - \hat{Y}$$

Similarly, the growth rate of the product of two variables is the sum of their growth rates:

$$x = X \cdot Y \quad\Longrightarrow\quad \hat{x} = \hat{X} + \hat{Y}$$

This simple rule makes it surprisingly easy to build dynamic macroeconomic models using ratios. Macroeconomics abounds with definitions that are ratios: the employment rate, for example, is the ratio of the number of people with a job to the population. It is therefore possible to start with a set of definitions and develop a dynamic model. I’ll illustrate this procedure in the next chapter.
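As a simple worked example (with made-up growth rates, and writing λ for the employment rate): if employment grows at 3% per year and population grows at 1% per year, then the employment rate, being their ratio, grows at roughly 2% per year:

$$\lambda = \frac{L}{N} \quad\Longrightarrow\quad \hat{\lambda} = \hat{L} - \hat{N} = 0.03 - 0.01 = 0.02$$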

  3. A Simple Approach to Dynamics Using System Dynamics

System dynamics, which was invented by Jay Forrester in the 1950s, is simply a way to generate systems of ordinary differential equations. So long as the flowchart forms a causal loop, it generates a set of differential equations which can be of daunting complexity. But if you can read the flowchart, you can understand the model.

Building an insightful system dynamics model isn’t easy, but it is far easier than the laborious methods that Neoclassical economists use to purport to derive macroeconomic models from microeconomics: when Blanchard noted that the fool’s errand of deriving a macroeconomic model from microeconomic concepts implied “a long slog from the competitive model to a reasonably plausible description of the economy” (Blanchard 2016, p. 3), he wasn’t joking. I remember discussing modelling some detail of the monetary system with Michael Kumhof of the Bank of England—who is the only Neoclassical economist who understands that banks create money (Kumhof 2015). Michael remarked that it would take of the order of months to add it to his DSGE model. I replied that it would take me of the order of minutes to do the same thing in Minsky. I’ll illustrate that point in the next few Chapters.

Profit Maximization in the Real World

An essential component of Neoclassical microeconomics is the proposition that firms maximize their profits by equating marginal revenue—the addition to total revenue caused by the last unit sold—to marginal cost—the addition to total costs caused by the last unit produced. This proposition plays a fundamental role in all DSGE-based macroeconomic models too, in the form of profit maximization rules for the competitive and monopolistic industry sectors that are integral parts of these models. The former maximize profits by setting price equal to marginal cost: since they lack market power, for them, the market price is a constant, and therefore price equals marginal revenue. The latter set price at a markup above marginal cost, at the point where marginal cost equals marginal revenue.


The formulas in DSGE models that apply these rules are normally far more complicated than the simple rules taught in microeconomic textbooks, but they are based on the same principles: setting marginal revenue equal to marginal cost maximizes profits, because these are the slopes respectively of total revenue and total cost. When the slopes of the total revenue and total cost curves are the same, the gap between them—which is total profit—is maximized.
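The underlying logic can be stated in one line: writing total profit as total revenue minus total cost, the first-order condition for an interior maximum equates the two slopes, and the second-order condition is what requires marginal cost to be rising (faster than marginal revenue) through the intersection:

$$\pi(q) = TR(q) - TC(q), \qquad \frac{d\pi}{dq} = MR(q) - MC(q) = 0, \qquad \frac{d^2\pi}{dq^2} = MR'(q) - MC'(q) < 0$$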

There are logical problems with this argument (Keen and Standish 2010, 2015), but there is a far more important problem: the conclusion that a firm maximises profits by equating marginal revenue and marginal cost requires that marginal cost rises with output, but this condition does not apply for the vast majority of firms in the real world.

The logical basis of the condition is the “Law of Diminishing Returns”: that output rises as variable inputs rise, but at a decreasing pace. To cite Samuelson and Nordhaus’s textbook again:

Under the law of diminishing returns, a firm will get less and less extra output when it adds additional units of an input while holding other inputs fixed. In other words, the marginal product of each unit of input will decline as the amount of that input increases, holding all other inputs constant. (Samuelson and Nordhaus 2010, pp. 108-109)

The Law of Diminishing Returns then generates the standard model of a firm’s cost structure, in which marginal costs are steeply rising, the average total cost curve is U-shaped, and average fixed costs are relatively small—see Figure 5, which is taken from Samuelson and Nordhaus’s textbook.

Figure 5: The standard textbook model of the cost structure of the representative firm (Samuelson and Nordhaus 2010, p. 131)

The Law of Diminishing Returns is logically correct, given its assumptions, as is the shape of the cost curves derived from it. But it is also, to quote the humourist H. L. Mencken, “neat, plausible, and wrong”. When economists have investigated the actual cost structures of real firms—as opposed to the imaginary ones with which economics textbooks are populated—the universal result has been that, for the vast majority of firms, marginal cost is either constant, or falls with increasing output.

The last economist to discover this was Alan Blinder, who ranks as highly as Blanchard in the pantheon of influential Neoclassical economists: he was Vice-Chair of the Federal Reserve, Vice-President of the American Economic Association, and was and remains a prominent “New Keynesian” macroeconomist.

A distinguishing feature of “New Keynesian” DSGE models, compared to “New Classical” RBC (“Real Business Cycle”) models, is that New Classicals assume that prices adjust instantly to clear markets, whereas New Keynesians assume that prices are “sticky”—and hence that some unemployment of resources is involuntary. New Keynesians came up with many theories as to why prices should be sticky, each with different implications for the economy. To resolve this debate, Blinder decided to do a survey of American manufacturing firms, to see if their prices were indeed “sticky”, and if so, why.

The survey itself was enormous: 200 firms were directly interviewed, and their output accounted for “7.6 percent of the total value added in the nonfarm, for-profit, unregulated sector” of the US economy (Blinder 1998, p. 67). It was, without a doubt, the largest survey ever undertaken of the cost structures and behaviours of corporations.

It also contained a surprise for Neoclassical economists, which Blinder introduced as follows:

Another very common assumption of economic theory is that marginal cost is rising. This notion is enshrined in every textbook and employed in most economic models. It is the foundation of the upward-sloping supply curve…

The overwhelmingly bad news here (for economic theory) is that, apparently, only 11 percent of GDP is produced under conditions of rising marginal cost. Almost half is produced under constant MC … But that leaves a stunning 40 percent of GDP in firms that report declining MC functions. (Blinder 1998, pp. 101-102. Emphasis added)

Blinder summarised this result in what is possibly the ugliest graph ever published in an economics book—see Figure 6.

Figure 6: Blinder’s graphical summary of his survey’s findings on the shape of the marginal cost curve

In fact, virtually all of Blinder’s empirical findings were a surprise to Neoclassical economists. For my purposes here, the second-largest surprise (after the discovery that marginal costs are either constant or falling) was that average fixed costs were extremely high—compared to conventional economic theory—at, on average, 44% of average total costs at the firm’s normal operating level (see Table 1). Compare that to Samuelson and Nordhaus’s toy model in Figure 5, where average fixed costs are far smaller than average variable and marginal costs. [Footnote 1]

Table 1: Blinder’s summary of his empirical results (Blinder 1998, p. 106)

Blinder concluded that:

While there are reasons to wonder whether respondents interpreted these questions about costs correctly, their answers paint an image of the cost structure of the typical firm that is very different from the one immortalized in textbooks. (Blinder 1998, p. 105. Emphasis added)

However, though these results were a surprise to textbook writers, they were not at all a surprise to anyone who had ever surveyed firms about their cost structures. The definitive survey of such surveys was done by Fred Lee (Lee 1998). He found 71 surveys between 1924 and 1979, all of which found the same result, that marginal costs are constant or falling for the vast majority of firms.

The definitive explanation for this phenomenon—which contradicts the “Law” of Diminishing Returns—was given by Andrews in 1949:

On the usual assumptions, the static law of diminishing returns is held to justify the short-run average-cost curve being drawn as U-shaped. The rising branch of the U, in particular, is justified by the assumption that there will be an optimum dosage of the direct [“variable costs”] cost factors, after which average direct costs rise and, in the end, more than counterbalance the effect of the falling overhead-cost curve [“fixed costs”].

However, the rising part of the average direct-cost curve, and hence of the average total-cost curve, even when it would exist, is not relevant to normal analysis. The normal situation is that the businessman will plan to have reserve capacity, his average-cost curve falling for any outputs that he is likely to meet in practice, and his average direct costs, which the second part of this paper will treat as of crucial importance in the theory of pricing, normally being practically constant for very wide ranges of output. (Andrews, Lee, and Earl 1993, p. 78; Andrews 1949)

The reasons why real-world firms have substantial excess capacity, and therefore do not experience diminishing marginal productivity, are quite simple.

Firstly, Neoclassical textbooks describe factories as cartoonish shambles. For example, this is Mankiw’s explanation of why “Hungry Helen’s Cookie Factory” experiences diminishing marginal productivity:

At first, when only a few workers are hired, they have easy access to Helen’s kitchen equipment. As the number of workers increases, additional workers have to share equipment and work in more crowded conditions. Hence, as more and more workers are hired, each additional worker contributes less to the production of cookies…

when Helen’s kitchen gets crowded, each additional worker adds less to the production of cookies; this property of diminishing marginal product is reflected in the flattening of the production function as the number of workers rises… Because her kitchen is already crowded, producing an additional cookie is quite costly. Thus, as the quantity produced rises, the total-cost curve becomes steeper. (Mankiw 2001, pp. 273, 275)

“Hungry Helen’s Cookie Factory” is, of course, a made-up example: there is no such firm. Nor does “Thirsty Thelma’s Lemonade Stand” exist, nor “Big Bob’s Bagel Bin”: they are all simply products of Mankiw’s febrile (and alliterative) Neoclassical imagination. Equally, “Al’s Building Contractors” is a fictional example that Blinder uses in his textbook (Baumol and Blinder 2011). [Footnote 2]

In the real world, factories are designed by engineers to reach peak performance at or very near capacity. As Eiteman put it in 1947, engineers design factories:

so as to cause the variable factor to be used most efficiently when the plant is operated close to capacity. Under such conditions an average variable cost curve declines steadily until the point of capacity output is reached. A marginal curve derived from such an average cost curve lies below the average curve at all scales of operation short of peak production, a fact that makes it physically impossible for an enterprise to determine a scale of operations by equating marginal cost and marginal revenues unless demand is extremely inelastic. (Eiteman 1947, p. 913)

Real-world manufacturers, when they had the Neoclassical model explained to them, have often felt insulted by it. For example, Eiteman and Guthrie sent a survey to one thousand firms, which asked them to nominate which of 8 curves most closely approximated their average costs (Eiteman and Guthrie 1952)—see Figure 7. Of the 366 firms who replied, literally only one selected Curve 3, which is the one that is most like the standard drawing in Neoclassical textbooks (such as that in Figure 5, from Samuelson and Nordhaus 2010)—see Table 2. On the other hand, over 60% of firms chose Curve 7, and another 34% chose the very similar Curve 6.

Figure 7: Eiteman & Guthrie’s eight hypothetical average total cost curves (Eiteman and Guthrie 1952, pp. 834-835)

Table 2: The responses to Eiteman and Guthrie’s survey on the shape of the average cost curve: “Table II.-Choice of Cost Curve by Companies Without Reference to Number of Products” (Eiteman and Guthrie 1952, p. 837)

Curve Indicated    Number of Companies    Percent
1                    0                       0.0%
2                    0                       0.0%
3                    1                       0.3%
4                    3                       0.9%
5                   14                       4.2%
6                  113                      33.8%
7                  203                      60.8%
8                    0                       0.0%
Total              334                     100.0%

Eiteman and Guthrie noted that “The replies demonstrate a clear preference of businessmen for curves which do not offer great support to the argument of marginal theorists,” and continued that “If some of the personal comments of those who answered the questionnaires were to be repeated here, they would serve further to emphasize this conclusion”. They cited one businessman, who remarked that:

“The amazing thing is that any sane economist could consider No. 3, No. 4 and No. 5 curves as representing business thinking. It looks as if some economists, assuming as a premise that business is not progressive, are trying to prove the premise by suggesting curves like Nos. 3, 4, and 5.” (Eiteman and Guthrie 1952, p. 838)

Secondly, when a factory is first constructed, it is built with expansion of sales in mind: a factory which operates at 100% capacity on day one of its operations is a factory that was too small in the first place.

Thirdly, in a genuinely competitive industry, virtually every firm hopes to increase its market share—and precisely because this will increase its profits. It needs spare capacity in order to do this, and since all firms do it, the aggregate potential output of the industry substantially exceeds the proportion that is used. Even during the boom years of the 1960s, capacity utilization in the American economy never exceeded 90%, and it has trended down ever since—see Figure 8.

Figure 8: Aggregate capacity utilization in the USA: https://fred.stlouisfed.org/series/TCU#

The preconditions needed for diminishing marginal productivity to apply therefore do not exist in the real world. They exist only in the childish imaginations of Neoclassical textbook writers.

In 1949, Andrews sketched the standard real-world situation, in which diminishing marginal productivity does not apply, because a well-managed firm has idle capacity. Rather than having its machinery operated beyond its optimal ratio of variable inputs to fixed capital, as is assumed by economics textbooks, it has substantial spare capacity.

Figure 9: Andrews’ graphical summary of the normal cost curves for a manufacturing business (Andrews, Lee, and Earl 1993, p. 80)

The practical import of this actual representative shape of the cost curves for a manufacturing firm was given by Eiteman in 1948:

marginal cost curves will lie below average cost curves at all points of operation short of capacity. As a consequence, marginal cost curves will no longer intersect marginal revenue curves (1) when average revenue curves are horizontal or (2) when average revenue curves are high and almost horizontal. Under either of these conditions, business managers would simply produce as much goods as the current market would absorb without reference to marginal cost and marginal revenue. (Eiteman 1948, p. 900. Emphasis added)

The import of this is devastating for both Neoclassical micro and macroeconomics.

Neoclassical microeconomics is predicated on the output level of the firm being based on its marginal cost. A profit-maximizing firm produces where marginal cost equals marginal revenue, because that output level maximizes profits: any higher output level actually reduces profits. But as Figure 10 illustrates, marginal cost (which equals average variable cost when variable cost per unit is constant with respect to output) lies well below average total cost: any firm that did price at marginal cost would rapidly go bankrupt. The real-world profit-maximization condition is to sell as many units as possible—and preferably at the expense of sales to your rivals, with whom you compete not on price but on non-price factors like product differentiation. Hence, Figure 10 shows price P as a constant, not because of a silly Neoclassical assumption like “perfect information”, but because in a mature industry in the real world, additional sales by one competitor normally come at the expense of the sales of another.

Additional output lowers average fixed costs, which are a very significant component of total costs. With lower fixed costs per unit, constant variable costs per unit, and—in a competitive industry where non-price, product-differentiation-based competition dominates—an effectively constant price, both total and per-unit profits rise as capacity utilization rises, right out to the factory’s capacity, as Figure 10 illustrates.

Figure 10: Profit, Revenue and Costs at the target output level for the representative firm

Real-world profit maximization, therefore, involves selling as many units as possible, rather than stopping selling when “marginal cost exceeds marginal revenue”. Any real-world sales manager who told his staff to stop selling would (and should) be sacked.
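The arithmetic behind this claim is simple enough to verify directly. Here is a minimal sketch in Python, using purely illustrative numbers (a price of 10, a constant variable cost of 4 per unit, and fixed costs chosen so that they are roughly 40% of the price at around 80% of capacity; all values are assumptions for illustration only). With any such cost structure, both total profit and profit per unit rise all the way out to capacity, so there is never a point short of capacity at which it pays to stop selling.

```python
import numpy as np

# Illustrative (assumed) cost structure for a representative firm
P = 10.0          # market price per unit
V = 4.0           # constant variable cost per unit (= marginal cost)
capacity = 100_000
F = 320_000.0     # fixed costs: ~40% of price per unit at ~80% of capacity

q = np.arange(1, capacity + 1)          # every possible output level

average_fixed_cost = F / q              # falls as output rises
average_total_cost = average_fixed_cost + V
profit = P * q - (F + V * q)            # total profit
profit_per_unit = (P - V) - F / q       # rises towards P - V

# With P > V, every extra unit sold adds P - V to profit: no interior maximum
assert np.all(np.diff(profit) > 0)
assert np.all(np.diff(profit_per_unit) > 0)

for utilization in (0.25, 0.50, 0.75, 1.00):
    i = int(utilization * capacity) - 1
    print(f"utilization {utilization:4.0%}: "
          f"average total cost {average_total_cost[i]:5.2f}, "
          f"total profit {profit[i]:10,.0f}, "
          f"profit per unit {profit_per_unit[i]:5.2f}")
```

With these assumed numbers the firm makes losses at low utilization, breaks even at moderate utilization, and is most profitable flat out at capacity, which is the opposite of an interior “marginal cost equals marginal revenue” optimum.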

This is why Blinder described the results of his survey as “overwhelmingly bad news … (for economic theory)”. Not only does the theory not describe the behaviour of real-world firms, but all the welfare conclusions that flow from marginal this equalling marginal that disappear as well—and they have already been destroyed in any case by the logical fallacies in the derivation of the market demand curve covered in the previous chapter.

Macroeconomics that is based on Neoclassical microeconomics inherits these false microeconomic assumptions, and they play critical roles in the algebraic derivation of an RBC or DSGE model. But in the real world, which these models purport to describe, these conditions are strongly violated.

Therefore, even if it were possible to derive macroeconomics from microeconomics—which the previous chapter showed was a fool’s errand for any complex system—Neoclassical microeconomics would not be the right foundation for it.

So, what to do? I’ll give my alternative in the following chapters, after a brief Appendix which lays out the mathematics of real-world profit maximization, and speculates about what a real-world microeconomics might look like.

  1. Appendix: Real World Profit Maximization and Real-World Microeconomics

Eiteman’s conjecture—that the profit-maximisation strategy of real-world firms is to sell as many units as possible—is easily illustrated using Figure 10. A firm with the cost structure of a typical real-world firm has fixed costs of F (which are a substantial component of total costs at the target output level: of the order of 40% of the market price, according to Blinder’s research), constant variable costs per unit AVC = V (so that marginal cost is also constant, and equal to V), total revenue equal to the market price P times the quantity sold q, and a price P that substantially exceeds average variable, and hence marginal, cost: P > V.

The profits of the firm at output q are therefore given by:

\[ \pi(q) = P\,q - \left(F + V\,q\right) \]

The differential of profit with respect to the quantity sold is therefore always positive:

\[ \frac{d\pi}{dq} = P - V > 0 \]

Therefore, as Eiteman said, the best strategy for the firm is to sell as many units as it can. Since its competitors are all trying to do the same thing, while market demand is, for mature industries, a relatively stable fraction of GDP, this leads to the evolutionary competitive struggle we witness in most real-world markets.

We can get a slightly more informative formula by rearranging this to show profit per unit:

\[ \frac{\pi(q)}{q} = \left(P - V\right) - \frac{F}{q} \]

The differential of profit per unit with respect to q is also always positive:

\[ \frac{d}{dq}\!\left(\frac{\pi(q)}{q}\right) = \frac{F}{q^{2}} > 0 \]

The rate of change of profit with respect to output can also be expressed in terms of fixed cost per unit and profit per unit of output, by simply rearranging the definition of profit:

\[ \frac{d\pi}{dq} = P - V = \frac{\pi(q)}{q} + \frac{F}{q} \]

This confirms the graphical intuition in Figure 10. An increase in output lowers the height of the rectangle for fixed costs (average fixed cost per unit), while the heights of the rectangles for variable costs and for revenue (average variable cost and price per unit) remain constant. The falling average fixed cost means that the gap between average revenue and average cost grows. Total profit therefore rises with rising output, because the substantial fixed costs of production are spread over a larger number of units sold, and profit per unit grows as fixed cost per unit shrinks.
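For readers who prefer to check this algebra symbolically, here is a minimal sketch using SymPy (the symbols are simply the F, V, P and q defined above). It confirms that profit and profit per unit both rise with output whenever P > V and F > 0, and that the rearranged form of the derivative holds.

```python
import sympy as sp

q, F, V, P = sp.symbols('q F V P', positive=True)

profit = P * q - (F + V * q)        # pi(q) = P*q - (F + V*q)
profit_per_unit = profit / q        # pi(q)/q = (P - V) - F/q

dprofit_dq = sp.diff(profit, q)                        # P - V
dppu_dq = sp.simplify(sp.diff(profit_per_unit, q))     # F/q**2

print(dprofit_dq)    # P - V: positive whenever price exceeds variable cost
print(dppu_dq)       # F/q**2: positive whenever there are any fixed costs

# The rearranged form: dpi/dq equals profit per unit plus fixed cost per unit
assert sp.simplify(dprofit_dq - (profit / q + F / q)) == 0
```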

Given these fundamental problems with both the theory of demand and the model of production, the only thing one can say in favour of Neoclassical microeconomics is that it gives Neoclassical microeconomists something to do. But it is worse than useless in the real world: with on the order of 95% of firms lacking the preconditions for diminishing marginal productivity, and a theory of demand that does not transcend a single individual, the games Neoclassical microeconomists play are just that: games.

This real-world analysis raises the possibility of an evolutionary microeconomics that, while it could not be used to derive macroeconomics, would be compatible with the macroeconomics I will develop in subsequent chapters, and could also say something useful about actual competitive behaviour. An increase in sales by one firm will give it more internally-generated funds to invest and both expand output and differentiate its product further, while the allure of large profits would encourage innovation by small unprofitable firms aspiring to become big profitable ones.

This dynamic, evolutionary process could explain the actual distribution of firm sizes, which, as Axtell established, does not conform to the Neoclassical taxonomic classes of “monopoly, oligopoly, and competitive”, but instead displays a power-law “Zipf” distribution (Axtell 2001, 2006). As Axtell remarked, “The Zipf distribution is an unambiguous target that any empirically accurate theory of the firm must hit” (Axtell 2001, p. 1820). The Neoclassical model has no chance of doing so, while an evolutionary model, in which size enables faster growth and the profitability of large companies encourages product innovation by their smaller rivals, just might be able to reproduce the actual structure of real-world markets.

Andrews, P. W. S. 1949. ‘A reconsideration of the theory of the individual business: costs in the individual business; the determination of prices’, Oxford Economic Papers: 54-89.

Andrews, P. W. S., F. S. Lee, and Peter E. Earl. 1993. The Economics of Competitive Enterprise (Edward Elgar: Aldershot).

Axtell, Robert L. 2001. ‘Zipf Distribution of U.S. Firm Sizes’, Science (American Association for the Advancement of Science), 293: 1818-20.

———. 2006. “Firm Sizes: Facts, Formulae, Fables and Fantasies.” In, edited by Center on Social and Economic Dynamics.

Baumol, W. J., and Alan Blinder. 2011. Economics: Principles and Policy.

Blinder, Alan S. 1991. ‘Why are Prices Sticky? Preliminary Results from an Interview Study’, The American Economic Review, 81: 89-96.

———. 1998. Asking about prices: a new approach to understanding price stickiness (Russell Sage Foundation: New York).

Eiteman, Wilford J. 1947. ‘Factors Determining the Location of the Least Cost Point’, The American Economic Review, 37: 910-18.

———. 1948. ‘The Least Cost Point, Capacity, and Marginal Analysis: A Rejoinder’, The American Economic Review, 38: 899-904.

Eiteman, Wilford J., and Glenn E. Guthrie. 1952. ‘The Shape of the Average Cost Curve’, The American Economic Review, 42: 832-38.

Keen, Steve, and Russell Standish. 2010. ‘Debunking the theory of the firm—a chronology’, Real World Economics Review, 54: 56-94.

———. 2015. ‘Response to David Rosnick’s “Toward an Understanding of Keen and Standish’s Theory of the Firm: A Comment’, World Economic Review, 2015: 130.

Kuhn, Thomas. 1970. The Structure of Scientific Revolutions (University of Chicago Press: Chicago).

Lee, Frederic S. 1998. Post Keynesian Price Theory (Cambridge University Press: Cambridge).

Mankiw, N. Gregory. 2001. Principles of Microeconomics (South-Western College Publishers: Stamford).

Planck, Max. 1949. Scientific Autobiography and Other Papers (Philosophical Library; Williams & Norgate: London).

Samuelson, Paul A., and William D. Nordhaus. 2010. Microeconomics (McGraw-Hill Irwin: New York).

[Footnote 1: The short-run supply curve, according to Neoclassical theory, begins at the point M in Figure 5, at which point fixed costs are just 25% of total costs. As a fraction of total costs, they fall sharply from that point on.]

[Footnote 2: Why did Blinder, a decade after he found that diminishing marginal productivity does not apply to real-world firms (Blinder 1991, 1998), still teach it in his textbook (Baumol and Blinder 2011, pp. 127-133)? This reflects the phenomenon noted by Kuhn (Kuhn 1970) and Planck (Planck 1949), that most scientists, once they are committed to a paradigm, continue to cling to it even after being presented with evidence that it is wrong. Blinder’s case has some additional pathos, in that his discovery clearly disturbed him, so much so that the explanation he gives for diminishing marginal productivity, and one of the examples he gives of it, are both wrong, even from the point of view of Neoclassical economics.

Blinder claims that “The ‘law’ of diminishing marginal returns … rests simply on observed facts; economists did not deduce the relationship analytically” (Baumol and Blinder 2011, p. 132), which is simply and doubly false: it was derived from deductive logic rather than observation, and it has been contradicted by observed facts—including Blinder’s own research.

He then gives, as an example of “diminishing marginal productivity”, the case of Chinese grain output over a 15-year period: “In China, for instance, farmers have been using increasingly more fertilizer as they try to produce larger grain harvests to feed the country’s burgeoning population. Although its consumption of fertilizer is four times higher than it was 15 years ago, China’s grain output has increased by only 50 percent. This relationship certainly suggests that fertilizer use has reached the zone of diminishing returns” (Baumol and Blinder 2011, p. 132).

This is not an example of “diminishing marginal productivity”, because that deductive concept is based on adding more and more variable inputs to a fixed input at a point in time. Any student who used Blinder’s answers in a course taught using another textbook would be failed.

I see Blinder’s behaviour as evidence of how disturbing the results of his own research were to Blinder himself, and as an example of the mental gymnastics that believers in the failed paradigm of Neoclassical economics are willing to undertake to avoid abandoning the paradigm.

Other Neoclassical economists have taken the easier route of not reading Blinder’s research at all. My evidence here is its sales rank on Amazon—2,538,317th as of September 17th 2023, versus Mas-Colell’s textbook’s rank of 81,426th (even though it is three years older than Blinder’s book)—and the trivial number of reviews it has: literally only one, versus 149 for Mas-Colell’s textbook.

I will leave finding out who wrote the solitary review of Blinder’s text as an exercise for the reader: see https://www.amazon.com/Asking-About-Prices-Understanding-Stickiness/dp/0871541211/#customerReviews.]

The “Anything Goes” Market Demand Curve

Solow noted that DSGE models have “a single representative consumer optimizing over infinite time with perfect foresight or rational expectations, in an environment that realizes the resulting plans more or less flawlessly through perfectly competitive forward-looking markets for goods and labor, and perfectly flexible prices and wages” (Solow 2003, Emphasis added). This bizarre construct is the consequence of the theoretical failure that lies below the apparent success of deriving macroeconomics from microeconomics.

This is chapter four of my draft book Rebuilding Economics from the Top Down. The previous chapter was The Impossibility of Microfoundations for Macroeconomics, which is published here on Patreon and here on Substack

If you like my work, please consider becoming a paid subscriber from as little as $10 a year on Patreon, or $5 a month on Substack.
PS Word doesn’t export footnotes or endnotes to blogs, so I’ve inserted footnote text here inside brackets [] and formatted them in italic

Neoclassical theory has an elaborate and internally consistent model of consumption by an individual consumer, from which it can prove, given its “Axioms of Revealed Preference” (Samuelson 1948), that the individual’s demand curve slopes down [Strictly speaking this is the “Hicksian Compensated Demand Curve”, which I explain in Chapter 3 of Debunking Economics (Keen 2011), “The Calculus of Hedonism”.]. An individual demand curve therefore obeys the so-called “Law of Demand” that, to cite Alfred Marshall:

There is then one general law of demand: The greater the amount to be sold, the smaller must be the price at which it is offered in order that it may find purchasers; or, in other words, the amount demanded increases with a fall in price, and diminishes with a rise in price. (Marshall 1890 [1920], p. 99)

Marshall thought that what applied to the individual would also apply to the market, because the “peculiarities in the wants of individuals” would cancel each other out, so that market demand would also fall as the price rose:

In large markets, then—where rich and poor, old and young, men and women, persons of all varieties of tastes, temperaments and occupations are mingled together—the peculiarities in the wants of individuals will compensate one another in a comparatively regular gradation of total demand. (Marshall 1890 [1920], p. 98)

The bad news, which was discovered decades later by a number of mathematical economists (Gorman 1953; Samuelson 1956; Sonnenschein 1972, 1973a, 1973b; Shafer and Sonnenschein 1982; Mantel 1974, 1976; Debreu 1974), was that this was not true: summing the demand curves from many individuals, each of whose personal demand curves obeyed the “Law of Demand”, can generate a market demand curve that does not necessarily slope downwards. (It can instead adopt any shape at all that you can draw that doesn’t cross itself, and which doesn’t generate two quantity outputs for one relative price input. This does not mean that empirically derived demand curves behave this way, by the way—since empirical data will inherently include the effect of the distribution of income. What it does mean is that Neoclassical economists cannot derive a core aspect of their model from their own core assumptions.)

The only way to avoid this result was to make the absurd assumptions that changes in relative prices did not affect the distribution of income, and that changes in the distribution of income had no impact upon market demand. This was put in terms of Engel curves by Gorman [Engel curves describe how an individual’s expenditure on a given good changes with income, while relative prices are held constant. If a good is a necessity, then its Engel curve for a given individual will show this good becoming a smaller and smaller component of the individual’s total expenditure as his/her income rises. If it is a luxury, then its Engel curve for a given individual will show this good becoming a larger and larger component of the individual’s total expenditure as his/her income rises], who was the first economist to discover this result:

we will show that there is just one community indifference locus through each point if, and only if, the Engel curves for different individuals at the same prices are parallel straight lines. (Gorman 1953, p. 63)

Samuelson, three years later, quite correctly described this result as an “impossibility theorem”:

The common sense of this impossibility theorem is easy to grasp. Allocating the same totals differently among people must in general change the resulting equilibrium price ratio. The only exception is where tastes are identical, not only for all men, but also for all men when they are rich or poor. (Samuelson 1956, p. 5. Emphasis added)
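To make the impossibility concrete, here is a minimal numerical sketch (the Cobb-Douglas preferences and all of the numbers are illustrative assumptions, not anything taken from Gorman’s or Samuelson’s papers). Each consumer obeys the Law of Demand individually, but because their Engel curves are not parallel, aggregate demand at unchanged prices depends on how a fixed total income is split between them, which is precisely what a single, distribution-independent market demand curve cannot allow.

```python
# Two Cobb-Douglas consumers: consumer i spends a fixed share alpha_i of
# income m_i on good 1, so individual demand is x_i = alpha_i * m_i / p1.
# Each individual demand curve slopes down in p1, but the Engel curves have
# different slopes (alpha_1 != alpha_2), violating Gorman's condition of
# parallel linear Engel curves.

def demand_good1(alpha: float, income: float, p1: float) -> float:
    return alpha * income / p1

alpha_1, alpha_2 = 0.8, 0.2   # assumed, deliberately different preferences
p1 = 2.0                      # price of good 1, held fixed throughout
total_income = 100.0

for m1 in (20.0, 50.0, 80.0):         # three splits of the same total income
    m2 = total_income - m1
    aggregate = demand_good1(alpha_1, m1, p1) + demand_good1(alpha_2, m2, p1)
    print(f"income split ({m1:.0f}, {m2:.0f}): aggregate demand = {aggregate:.1f}")

# Prints 16.0, 25.0 and 34.0: the same total income at the same prices yields
# three different aggregate demands, purely because income was redistributed.
```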

Therefore, the “Law of Demand”, which played such an essential role in Marshall’s reasoning, did not apply to the market demand curve. This invalidated the very foundations of Neoclassical theory, of starting its analysis from the subjective utility of an individual consumer, and the profit-maximizing behaviour of an individual firm. Au contraire, it validated the practice of the preceding Classical school—including the much-loathed and derided Karl Marx—of treating society as consisting of social classes (capitalists, landlords, workers, bankers), each with different income sources (profits, rents, wages, interest) and differing consumption patterns (a Protestant abstemious focus on investment, profligate consumption, subsistence, and Veblenian conspicuous consumption). As Alan Kirman later put it:

If we are to progress further we may well be forced to theorise in terms of groups who have collectively coherent behaviour. Thus demand and expenditure functions if they are to be set against reality must be defined at some reasonably high level of aggregation. The idea that we should start at the level of the isolated individual is one which we may well have to abandon. There is no more misleading description in modern economics than the so-called microfoundations of macroeconomics which in fact describe the behaviour of the consumption or production sector by the behaviour of one individual or firm. (Kirman 1989, p. 138. Emphasis added)

This result—now known as the Sonnenschein-Mantel-Debreu Theorem—showed not only that macroeconomics cannot be derived from microeconomics, but that even the microeconomics of a single market—the analysis of that market’s demand—cannot be derived from the microeconomics of the individual—the theory of the consumption behaviour of a single consumer.

This realisation should have been a moment of revolutionary change for economics. The “marginal revolution”, which rejected the objectively-based theory of value of the Classical School and replaced it with a subjective theory of the individual, had failed its first hurdle, the jump from the analysis of the isolated individual to the aggregate. Though the Classical School had its own problems (Keen 1993a, 1993b), the Neoclassical School was not a viable alternative, but instead was a dead-end.

Needless to say, that is not how Neoclassical economists—even those who discovered this result—reacted. Instead, faced with a result that meant they had to either abandon their paradigm or make ridiculous assumptions to hang onto it, they did the latter.

Gorman’s necessary and sufficient condition noted above means that the very concept of a market system disappears: it is equivalent to assuming that there is only one consumer, and only one commodity. But if there is only one consumer and only one commodity, how can there be relative prices, let alone a market? Rather than being a model of an actual macroeconomy, this is a model of Robinson Crusoe, alone on his island, where all he can harvest and eat is coconuts.

And yet Gorman described his condition as “intuitively reasonable”, while simultaneously demonstrating that it was absurd:

The necessary and sufficient condition quoted above is intuitively reasonable. It says, in effect, that an extra unit of purchasing power should be spent in the same way no matter to whom it is given. (Gorman 1953, pp. 63-64. Emphasis added)

That is not “intuitively reasonable”: it is intuitively bonkers. Giving “an extra unit of purchasing power” to a billionaire will obviously result in totally different consumption than if it were given to a single mother.

Equally bonkers was Samuelson’s ultimate assumption that the entire economy could be treated as one big, happy family, in which income was redistributed prior to consumption, so that everyone was happy:

If within the family there can be assumed to take place an optimal reallocation of income so as to keep each member’s dollar expenditure of equal ethical worth, then there can be derived for the whole family a set of well-behaved indifference contours relating the totals of what it consumes: the family can be said to act as if it maximizes such a group preference function.

The same argument will apply to all of society if optimal reallocations of income can be assumed to keep the ethical worth of each person’s marginal dollar equal. (Samuelson 1956, p. 21. Bold emphasis added)

If? This is possibly the biggest “if” in the history of academic scholarship. Can this be assumed in the case of the United States of America—possibly the most fractious superpower in the history of human civilisation? The obvious answer is “of course not!”—and this applies to American families, let alone the entire country. But the obvious absurdity of this assumption doesn’t cause Samuelson one iota of worry or doubt. He immediately continued on with:

By means of Hicks’s composite commodity theorem and by other considerations, a rigorous proof is given that the newly defined social or community indifference contours have the regularity properties of ordinary individual preference contours (nonintersection, convexity to the origin, etc.)…

Our analysis gives a first justification to the Wald hypothesis that market totals satisfy the “weak axiom” of individual preference…

The foundation is laid for the “economics of a good society.” (Samuelson 1956, pp. 21-22. Emphasis added)

“A rigorous proof”? After possibly the most absurd assumption ever, even in a discipline renowned for absurd assumptions? It is more rigor mortis of the mind than intellectual rigour, that Samuelson, having proven that his endeavour to derive market demand from a logically coherent theory of individual demand that he developed (Samuelson 1938, 1948) had failed, could not bring himself to accept this, and instead made a manifestly false assumption to hide this contrary result. [This is not an unusual failing, unfortunately: Thomas Kuhn’s The Structure of Scientific Revolutions (Kuhn 1970) shows that this is the norm in science. Once they are committed to a paradigm, it is extremely rare that scientists ever change their minds when confronted with evidence that contradicts it: they instead search for ways to modify the paradigm to cope with the irreconcilable anomaly. Max Planck, the discoverer of quantum mechanics, remarked on this poignantly in his autobiography: “It is one of the most painful experiences of my entire scientific life that I have but seldom—in fact, I might say, never—succeeded in gaining universal recognition for a new result, the truth of which I could demonstrate by a conclusive, albeit only theoretical proof” (Planck 1949, p. 22).]

Bizarrely, if his assumption of “optimal reallocations of income … to keep the ethical worth of each person’s marginal dollar equal” had indeed laid the foundation “for the ‘economics of a good society'”, then Samuelson had proven that this “good society” must have a benevolent dictatorship at its helm, so that those “optimal reallocations” can occur. Capitalism, to behave as Neoclassical economists think it does, must be run by a socialist dictator.

Unfortunately, Gorman’s and Samuelson’s bonkers reactions became the norm, out of which emerged the “representative agent”.

Students are generally given a mendacious explanation of this manifestly absurd construct. In the most childish of such texts—such as Gregory Mankiw’s Principles of Microeconomics (Mankiw 2001)—students are simply told that the market demand curve is the sum of individual demand curves:

MARKET DEMAND AS THE SUM OF INDIVIDUAL DEMANDS. The market demand curve is found by adding horizontally the individual demand curves. At a price of $2, Catherine demands 4 ice-cream cones, and Nicholas demands 3 ice-cream cones. The quantity demanded in the market at this price is 7 cones. (Mankiw 2001, p. 71)

Samuelson’s own textbook—now maintained by William Nordhaus, whose work on climate change is the worst “research” I have ever encountered (Keen 2020)—delivers the most blatant misrepresentation of these results:

Market Demand: Our discussion of demand has so far referred to “the” demand curve. But whose demand is it? Mine? Yours? Everybody’s? The fundamental building block for demand is individual preferences. However, in this chapter we will always focus on the market demand, which represents the sum total of all individual demands. The market demand is what is observable in the real world.

The market demand curve is found by adding together the quantities demanded by all individuals at each price.

Does the market demand curve obey the law of downward-sloping demand? It certainly does. (Samuelson and Nordhaus 2010, p. 48. Boldface emphasis added)

Only slightly less deceptively, Hal Varian’s Intermediate Microeconomics: A Modern Approach implies that the derivation of a market demand curve from individual demand curves is possible, but the process is too difficult for middle-level undergraduates to understand:

Since each individual’s demand for each good depends on prices and his or her money income, the aggregate demand will generally depend on prices and the distribution of incomes. However, it is sometimes convenient to think of the aggregate demand as the demand of some “representative consumer” who has an income that is just the sum of all individual incomes. The conditions under which this can be done are rather restrictive, and a complete discussion of this issue is beyond the scope of this book. (Varian 2010, p. 271. Emphasis added.)

Mas-Colell’s gargantuan postgraduate text Microeconomic Theory (Mas-Colell, Whinston, Green, and El-Hodiri 1996) provides the most honest statement of the theorem. In a section accurately described as “Anything Goes: The Sonnenschein-Mantel-Debreu Theorem“, it states that a market demand curve can have any shape at all:

Can … [an arbitrary function] … coincide with the excess demand function of an economy for every p [price]… Of course … [the arbitrary function] must be continuous, it must be homogeneous of degree zero, and it must satisfy Walras’ law. But for any [arbitrary function] satisfying these three conditions, it turns out that the answer is, again, “yes”. (Mas-Colell, Whinston et al. 1995, p. 602)

But Mas-Colell also uses Samuelson’s escape clause, of a “benevolent central authority” that redistributes income prior to trade, with nary a word on how absurd an assumption this is to apply to a market economy: [This is especially ridiculous coming from a school of thought in economics that champions a libertarian vision of capitalism—that it would be better off with no government intervention whatsoever.]

Let us now hypothesize that there is a process, a benevolent central authority perhaps, that, for any given prices p and aggregate wealth function w, redistributes wealth in order to maximize social welfare

If there is a normative representative consumer, the preferences of this consumer have welfare significance and the aggregate demand function can be used to make welfare judgments by means of the techniques [used for individual consumers]. In doing so however, it should never be forgotten that a given wealth distribution rule [imposed by the “benevolent central authority”] is being adhered to and that the “level of wealth” should always be understood as the “optimally distributed level of wealth”. (Mas-Colell, Whinston, and Green 1995, pp. 117-118. Emphasis added)

It is therefore little wonder that student economists, taught in this fashion ever since 1953, saw no problem with the concept of a “representative agent”, and ultimately built models of the macroeconomy that they believed were consistent with microeconomics.

Older and more realistic hands like Solow could see through this subterfuge, but they were ignored until reality, in the form of the Global Financial Crisis, exposed the inadequacy of the foundations that no amount of adding of “frictions” could overcome.

The DSGE model populates its simplified economy with exactly one single combined worker, owner, consumer, everything else who plans ahead carefully, lives forever; and … there are no conflicts of interest, no incompatible expectations, no deceptions… Under pressure from skeptics and from the need to deal with actual data, DSGE modelers have worked hard to allow for various market frictions and imperfections like rigid prices and wages, asymmetries of information, time lags and so on… But the basic story always treats the whole economy as if it were like a person trying consciously and rationally to do the best it can on behalf of the representative agent, given its circumstances. This cannot be an adequate description of a national economy, which is pretty conspicuously not pursuing a consistent goal. A thoughtful person faced with that economic policy based on that kind of idea might reasonably wonder what planet he or she is on. (Solow 2010, p. 13. Emphasis added)

Despite these existential problems, Neoclassical economists have developed an answer of sorts to Solow and Kirman, with what are now called HANKs: “Heterogeneous Agent New Keynesian” models. These consider at least two different types of agents, with potentially differing income sources, consumption, and expectations (Chari and Kehoe 2008; Kaplan, Moll, and Violante 2018; Acharya and Dogra 2020; Alves, Kaplan, Moll, and Violante 2020; Acharya, Challe, and Dogra 2023). It can at least be asserted that they are treating “demand and expenditure functions … at some reasonably high level of aggregation” (Kirman 1989, p. 138).

But there is no answer to the next issue: the incompatibility of the real-world cost structure of firms with the Neoclassical theory of profit maximization.

Acharya, Sushant, Edouard Challe, and Keshav Dogra. 2023. ‘Optimal Monetary Policy According to HANK’, The American Economic Review, 113: 1741-82.

Acharya, Sushant, and Keshav Dogra. 2020. ‘Understanding Hank: Insights from a Prank’, Econometrica, 88: 1113-58.

Alves, Felipe, Greg Kaplan, Benjamin Moll, and Giovanni L. Violante. 2020. ‘A Further Look at the Propagation of Monetary Policy Shocks in HANK’, Journal of Money, Credit and Banking, 52: 521-59.

Chari, V. V., and Patrick J. Kehoe. 2008. ‘Response from V. V. Chari and Patrick J. Kehoe’, The Journal of Economic Perspectives, 22: 247-50.

Debreu, Gerard. 1974. ‘Excess demand functions’, Journal of mathematical economics, 1: 15-21.

Gorman, W. M. 1953. ‘Community Preference Fields’, Econometrica, 21: 63-80.

Kaplan, Greg, Benjamin Moll, and Giovanni L. Violante. 2018. ‘Monetary Policy According to HANK’, The American Economic Review, 108: 697-743.

Keen, Steve. 1993a. ‘The Misinterpretation of Marx’s Theory of Value’, Journal of the history of economic thought, 15: 282-300.

———. 1993b. ‘Use-Value, Exchange Value, and the Demise of Marx’s Labor Theory of Value’, Journal of the history of economic thought, 15: 107-21.

———. 2011. Debunking economics: The naked emperor dethroned? (Zed Books: London).

———. 2020. ‘The appallingly bad neoclassical economics of climate change’, Globalizations: 1-29.

Kirman, Alan. 1989. ‘The Intrinsic Limits of Modern Economic Theory: The Emperor Has No Clothes’, Economic Journal, 99: 126-39.

Kuhn, Thomas. 1970. The Structure of Scientific Revolutions (University of Chicago Press: Chicago).

Mankiw, N. Gregory. 2001. Principles of Microeconomics (South-Western College Publishers: Stamford).

Mantel, Rolf R. 1974. ‘On the Characterization of Aggregate Excess Demand’, Journal of Economic Theory, 7: 348-53.

———. 1976. ‘Homothetic Preferences and Community Excess Demand Functions’, Journal of Economic Theory, 12: 197-201.

Marshall, Alfred. 1890 [1920]. Principles of Economics (Library of Economics and Liberty).

Mas-Colell, A., M. D. Whinston, J. R. Green, and M. El-Hodiri. 1996. “Microeconomic Theory.” In, 108-13. Wien: Springer-Verlag.

Mas-Colell, Andreu, Michael Dennis Whinston, and Jerry R. Green. 1995. Microeconomic theory (Oxford University Press: New York).

Planck, Max. 1949. Scientific Autobiography and Other Papers (Philosophical Library; Williams & Norgate: London).

Samuelson, P. A. 1938. ‘A Note on the Pure Theory of Consumer’s Behaviour’, Economica, 5: 61-71.

———. 1948. ‘Consumption theory in terms of revealed preference’, Economica, 15: 243-53.

Samuelson, Paul A. 1956. ‘Social Indifference Curves’, The Quarterly Journal of Economics, 70: 1-22.

Samuelson, Paul A., and William D. Nordhaus. 2010. Economics (McGraw-Hill: New York).

Shafer, Wayne, and Hugo Sonnenschein. 1982. ‘Chapter 14 Market demand and excess demand functions’, Handbook of Mathematical Economics, 2: 671-93.

Solow, R. M. 2010. “Building a Science of Economics for the Real World.” In House Committee on Science and Technology Subcommittee on Investigations and Oversight. Washington.

Solow, Robert M. 2003. “Dumb and Dumber in Macroeconomics.” In Festschrift for Joe Stiglitz. Columbia University.

Sonnenschein, Hugo. 1972. ‘Market Excess Demand Functions’, Econometrica, 40: 549-63.

———. 1973a. ‘Do Walras’ Identity and Continuity Characterize the Class of Community Excess Demand Functions?’, Journal of Economic Theory, 6: 345-54.

———. 1973b. ‘The Utility Hypothesis and Market Demand Theory’, Western Economic Journal, 11: 404-10.

Varian, Hal R. 2010. Intermediate Microeconomics: A Modern Approach (W. W. Norton & Company: New York).

The Impossibility of Microfoundations for Macroeconomics


One thing which never ceases to bemuse me is the intellectual insularity of mainstream economics.

Every intellectual specialization is, by necessity, insular. Specialization necessarily requires that, to have expert knowledge in one field—say, physics—you must focus on that field to the exclusion of others—for example, chemistry. Given the extent of human knowledge today, this goes far further than it did in the 19th century: the days of the true polymath are well and truly over. There are now specializations within each field, so that a physicist specializing in statistical mechanics will know relatively little about condensed matter physics, for example, and so on in other fields.

But the insularity of mainstream economics goes far beyond this necessary minimum. Though there are a few convenient exceptions who are trotted out to counter generalizations like the one I am making here, in general, Neoclassicals are blithely unaware of how their own school of thought developed, of empirical and theoretical results that contradict core tenets of their own beliefs, of the competing schools of thought within economics, of the development of economics itself over time, and crucially, of intellectual developments that have extended human knowledge in fundamental ways that affect all fields of knowledge—including economics. Foremost here is their ignorance of complexity analysis, which has transformed many fields of study since its re-discovery in meteorology by Edward Lorenz in 1963 (Lorenz 1963).

  1. Complexity

Complexity is often defined by what it is not, so I will attempt a positive definition—which even so, still contains a negative:

A complex system is an often very simple dynamic system which, under certain very common conditions—nonlinear interactions between some of its three or more variables—generates extremely complicated far-from-equilibrium behaviour, which cannot be understood by reducing the system to its component parts (i.e., by reductionism).

Taking each element of this definition in turn:

  • A complex system is not necessarily complicated: Lorenz’s model, which I will discuss shortly, has just three equations and three parameters. Compare that to Ireland’s model, with 10 equations and 14 parameters. Lorenz’s model generates complex dynamics, while Ireland’s far more complicated model does not;
  • The conditions that generate them are very common because in essence, everything is nonlinear. Even a straight line is nonlinear because, unless it starts on one side of the Universe and ends on the other, the very fact that it stops is a nonlinearity. More to the point, an economic system has numerous instances where one variable is multiplied by another—for example, the wage bill is equal to the wage rate (a variable) multiplied by the number of employees (another variable);
  • Nonlinearity is essential, because effects like, for example, multiplying one variable by another, amplify disturbances. In linear models, as Blanchard himself pointed out, all effects are additive: a shock twice as big causes twice as big a disturbance. Nothing amplifies anything else;
  • Three or more variables are needed because, in the type of mathematics which is used to describe complex systems, the dimensionality of the model is equal to the number of variables, and the path your system draws cannot intersect itself. A one-variable system maps to a line, and along a line you can go left, right, or towards the middle without generating the same number twice, but that’s it. A two-variable system maps to a rectangle, and you can either spiral in towards its centre, or spiral in (or out) towards a fixed orbit, but that’s it. With a three-variable system, the dynamics maps to a box, and in a box you can weave incredibly complicated patterns without ever intersecting your path;
  • Far-from-equilibrium behaviour occurs because in such a system, given realistic parameters and initial conditions, one or more of the system’s equilibria can be what are called “strange attractors”: they attract the system from a distance, only to repel it when it gets near the equilibrium. Lorenz’s model has two “strange attractors”, while the economic model that I explain in Chapter 7—and which I first built in 1992—has one (Keen 1995); and
  • Reductionism can’t be used to understand such a model because, as soon as you reduce the system to one (or even two) of its components, the third dimension, which is essential for its complex dynamics, is eliminated.

The great French mathematician Henri Poincaré discovered the first complex system in his prize-winning 1889 analysis of the “three body problem”, which showed that the problem has no general closed-form solution, and that its trajectories can depend sensitively on their initial conditions. But this was long before humanity developed the capacity to visualise such systems using computers. Though he became rightly famous for that work, his discovery of complexity languished until the phenomenon was rediscovered by the mathematical meteorologist Edward Lorenz in 1963 (Lorenz 1963).

Lorenz was dissatisfied with the weather modelling practices of the late 1950s, which boiled down to both pattern matching (looking for a set of weather events in the past that resembled the pattern of the last few days, and predicting that tomorrow’s weather would be the same as the next day in the historical record) and linear modelling—the practice which, as Blanchard explained in “Where Danger Lurks” (Blanchard 2014), still dominates mainstream macroeconomics today.

Lorenz decided to construct an extremely simplified model of fluid dynamics which preserved the essential nonlinearity of the weather, by reducing the extremely complicated, high-dimensional Navier-Stokes partial-differential equations to just three very simple ordinary differential equations. What he saw ultimately transformed not only meteorology, but almost every field of science. But it has had virtually no impact on economics.

A picture, as they say, is worth 1000 words, so Figure 4 shows a picture of Lorenz’s model (Lorenz 1963)—rendered in the Open-Source system dynamics program Minsky, which I have developed to enable complex systems modelling in economics.

Figure 4: Lorenz’s “strange attractor” model of turbulent flow

The system has just three variables (x, y and z) and three parameters (a, b, and c) and just two nonlinear interactions: in the equation for y, x is multiplied by minus z, and in the equation for z, x is multiplied by y:

\[ \frac{dx}{dt} = a\,(y - x), \qquad \frac{dy}{dt} = x\,(b - z) - y, \qquad \frac{dz}{dt} = x\,y - c\,z \]

And yet, despite this simplicity, the pattern generated by the system is incredibly complicated—and indeed, beautiful. It was called “The Butterfly Effect” for more reasons than one.

It could also have been called “The Mask of Zorro”, given the existence of two “eyes” in the phase plots for x against y, y against z, and z against x. Tellingly for equilibrium-obsessed economists, these eyes are in fact two of the three equilibria of the system. The third is at the origin, x = y = z = 0, which was the initial condition of the simulation shown in Figure 4. I then nudged the x-value 0.001 away from its equilibrium at the ten-second mark. After this disturbance, the system was propelled away from this unstable equilibrium towards the other two—the “eyes” in the three phase plots. These equilibria are “strange attractors”, which means that they describe regions that the system approaches but never reaches—even though they are also equilibria of the system.

This simulation shows that all three equilibria of Lorenz’s system are unstable: if the system starts at an equilibrium, it will remain there, but if it starts anywhere else, or is disturbed from the equilibrium even by an infinitesimal distance, it will be propelled away from it, and forever display far-from-equilibrium dynamics.
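For readers who want to reproduce this behaviour outside Minsky, here is a minimal sketch in Python using SciPy. It assumes Lorenz’s original parameter values (10, 28 and 8/3), mapped here onto the a, b and c of the equations above, and, rather than nudging a running simulation, it simply starts the system a distance of 0.001 from the origin equilibrium: the trajectory is immediately propelled far from all three equilibria, yet remains bounded.

```python
import numpy as np
from scipy.integrate import solve_ivp

# Assumed mapping of Lorenz's original parameters onto a, b and c
a, b, c = 10.0, 28.0, 8.0 / 3.0

def lorenz(t, state):
    x, y, z = state
    return [a * (y - x),        # dx/dt
            x * (b - z) - y,    # dy/dt: x multiplied by minus z
            x * y - c * z]      # dz/dt: x multiplied by y

# The three equilibria: the origin, plus the two "eyes" at
# x = y = +/- sqrt(c*(b - 1)), z = b - 1
eye = np.sqrt(c * (b - 1.0))
eyes = [(eye, eye, b - 1.0), (-eye, -eye, b - 1.0)]

# Start an infinitesimal distance from the origin equilibrium
sol = solve_ivp(lorenz, (0.0, 50.0), [0.001, 0.0, 0.0], max_step=0.01)
x, y, z = sol.y

print(f"furthest |x| from the origin: {np.abs(x).max():.1f}")   # far from equilibrium...
print(f"largest z reached: {z.max():.1f}")                      # ...but bounded, no blow-up

# The trajectory orbits the two "eyes" without ever settling onto either one
for ex, ey, ez in eyes:
    dist = np.sqrt((x - ex) ** 2 + (y - ey) ** 2 + (z - ez) ** 2)
    print(f"closest approach to the eye at x = {ex:+.1f}: {dist.min():.2f}")
```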

And yet it does not “break down”: the simulation returns realistic values that stay within the bounds of the system. This contrasts strongly with the presumption once expressed by Hicks in relation to Harrod’s “knife-edge” model of economic instability (Harrod 1939, 1948), that models must assume stable equilibria, because a model with an unstable equilibrium “does not fluctuate; it just breaks down”:

Mr. Harrod … welcomes the instability of his system, because he believes it to be an explanation of the tendency to fluctuation which exists in the real world… But mathematical instability does not in itself elucidate fluctuation. A mathematically unstable system does not fluctuate; it just breaks down. (Hicks 1949, p. 108)

This still prevalent belief amongst economists is only true of linear systems—and even then, not all of them (Keen 2020). But it is categorically false about the behaviour of nonlinear systems.

Finally, not only is the pattern beautiful, it is also aperiodic: no one cycle is like any other. Before Lorenz’s work, scientists thought that aperiodic cycles would require exogenous shocks; after Lorenz’s work, only economists continue to assume that random shocks to a system are needed to cause aperiodicity.

It is to the great credit of meteorology that, very rapidly, Lorenz’s demonstration of the necessity of nonlinear, far-from-equilibrium modelling was accepted by meteorologists. There is much more to weather modelling than just Lorenz’s complex systems foundation, but his work contributed fundamentally to the dramatic increase in the accuracy of weather forecasts over the last half century—even in the face of global warming that is disturbing the underlying climate which determines the weather.

Likewise, Lorenz’s discovery was considered and applied by all manner of sciences, leading to complexity analysis operating as an important adjunct to the reductionist approach that remains the bedrock of scientific analysis. In 1999, the journal Science recognised this with a special issue devoted to complexity in numerous fields: Physics (Goldenfeld and Kadanoff 1999), Chemistry (Whitesides and Ismagilov 1999), Biology (Parrish and Edelstein-Keshet 1999; Pennisi 1999; Weng, Bhalla, and Iyengar 1999), Evolution (Service 1999), Geography (Werner 1999)—and even Economics (Arthur 1999).

Today, complex systems are an uncontroversial aspect of every science—but not of economics, because the dominant methods in economics are antithetical to the foundations of complexity. These methods include linearity, as Blanchard acknowledged, but more crucially, they involve a perverted form of reductionism that Physics Nobel Laureate Philip Anderson christened “Constructionism”.

  1. The impossibility of constructionism with complex systems

Anderson’s “More is Different” (Anderson 1972) attacked the idea that, in line with Ernest Rutherford’s quip that “all science is either physics or stamp collecting”, higher-level sciences—like chemistry, biology and even psychology—can and should be reduced to applied physics. Speaking as someone who had made fundamental contributions to condensed-matter physics, Anderson asserted that, though “The reductionist hypothesis … among the great majority of active scientists … is accepted without question”, this did not mean that higher-level sciences like chemistry could be generated from what we know about physics. “The main fallacy in this kind of thinking”, he declared:

is that the reductionist hypothesis does not by any means imply a “constructionist” one: The ability to reduce everything to simple fundamental laws does not imply the ability to start from those laws and reconstruct the universe. (Anderson 1972, p. 393. Emphasis added)

The phenomena that made this approach untenable were “the twin difficulties of scale and complexity”, since:

The behavior of large and complex aggregates of elementary particles, it turns out, is not to be understood in terms of a simple extrapolation of the properties of a few particles. Instead, at each level of complexity entirely new properties appear, and the understanding of the new behaviors requires research which I think is as fundamental in its nature as any other…

At each stage entirely new laws, concepts, and generalizations are necessary, requiring inspiration and creativity to just as great a degree as in the previous one. Psychology is not applied biology, nor is biology applied chemistry. (Anderson 1972, p. 393)

And nor is macroeconomics applied microeconomics—but mainstream economists, because of their extreme insularity, haven’t gotten Anderson’s memo. They continue to attempt to do the impossible: to construct the higher-level analysis of macroeconomics via a direct application of the lower-level analysis of microeconomics. That is only possible if all relationships in macroeconomics are linear as Blanchard described them in “Where Danger Lurks” (Blanchard 2014), when the lesson of the Global Financial Crisis was that they obviously are not—and in a subsequent chapter, I will show that there are fundamental nonlinearities in macroeconomics that should be embraced, rather than ignored.

Fittingly, Anderson concluded with two anecdotes from economics:

In closing, I offer two examples from economics of what I hope to have said. Marx said that quantitative differences become qualitative ones, but a dialogue in Paris in the 1920’s sums it up even more clearly:

FITZGERALD: The rich are different from us.

HEMINGWAY: Yes, they have more money. (Anderson 1972, p. 396)

Marx, and money, are two other things that Neoclassical economists ignore. But even more critically, they ignore the logical and empirical fallacies that beset microeconomics. Even if it were possible to derive macroeconomics from microeconomics, Neoclassical microeconomics is not the foundation one should use, because it is manifestly wrong about both consumption and production. Some Neoclassicals are aware of the logical problems with their model of consumption—though their reactions to it are bizarre. But none of them are aware of the empirical fallacies in their model of production.

References

 

Anderson, P. W. 1972. ‘More Is Different: Broken symmetry and the nature of the hierarchical structure of science’, Science, 177: 393-96.

Arthur, W. Brian. 1999. ‘Complexity and the Economy’, Science, 284: 107-09.

Barro, Robert J. 1989. ‘The Ricardian Approach to Budget Deficits’, The Journal of Economic Perspectives, 3: 37-54.

Blanchard, Olivier. 2014. ‘Where Danger Lurks’, Finance & Development, 51.

Goldenfeld, Nigel, and Leo P. Kadanoff. 1999. ‘Simple Lessons from Complexity’, Science, 284: 87-89.

Harrod, R. F. 1939. ‘An Essay in Dynamic Theory’, The Economic Journal, 49: 14-33.

———. 1948. Towards a Dynamic Economics (Macmillan: London).

Hicks, J. R. 1949. ‘Mr. Harrod’s Dynamic Theory’, Economica, 16: 106-21.

Keen, Steve. 1995. ‘Finance and Economic Breakdown: Modeling Minsky’s ‘Financial Instability Hypothesis.”, Journal of Post Keynesian Economics, 17: 607-35.

———. 2020. ‘Burying Samuelson’s Multiplier-Accelerator and resurrecting Goodwin’s Growth Cycle in Minsky.’ in Robert Y. Cavana, Brian C. Dangerfield, Oleg V. Pavlov, Michael J. Radzicki and I. David Wheat (eds.), Feedback Economics : Applications of System Dynamics to Issues in Economics (Springer: New York).

Li, Tien-Yien, and James A. Yorke. 1975. ‘Period Three Implies Chaos’, The American Mathematical Monthly, 82: 985-92.

Lorenz, Edward N. 1963. ‘Deterministic Nonperiodic Flow’, Journal of the Atmospheric Sciences, 20: 130-41.

Mendelsohn, Robert, William D. Nordhaus, and Daigee Shaw. 1994. ‘The Impact of Global Warming on Agriculture: A Ricardian Analysis’, The American Economic Review, 84: 753-71.

Parrish, Julia K., and Leah Edelstein-Keshet. 1999. ‘Complexity, Pattern, and Evolutionary Trade-Offs in Animal Aggregation’, Science, 284: 99-101.

Pennisi, Elizabeth. 1999. ‘Unraveling Bacteria’s Dependable Homing System’, Science, 284: 82-82.

Ricardo, David, and Piero Sraffa. 1952. The works and correspondence of David Ricardo / edited by Piero Sraffa, with the collaboration of M.H. Dobb Vol.6, Letters, 1810-1815 (Cambridge University Press for the Royal Economic Society: Cambridge).

Service, Robert F. 1999. ‘Exploring the Systems of Life’, Science, 284: 80-83.

Weng, Gezhi, Upinder S. Bhalla, and Ravi Iyengar. 1999. ‘Complexity in Biological Signaling Systems’, Science, 284: 92-96.

Werner, B. T. 1999. ‘Complexity in Natural Landform Patterns’, Science, 284: 102-04.

Whitesides, George M., and Rustem F. Ismagilov. 1999. ‘Complexity in Chemistry’, Science, 284: 89-92.

 

The Impossibility of Microfoundations for Macroeconomics

One thing which never ceases to bemuse me is the intellectual insularity of mainstream economics.

Every intellectual specialization is, by necessity, insular. Specialization necessarily requires that, to have expert knowledge in one field—say, physics—you must focus on that field to the exclusion of others—for example, chemistry. Given the extent of human knowledge today, this goes far further than it did in the 19th century: the days of the true polymath are well and truly over. There are now specializations within each field, so that a physicist specializing in statistical mechanics will know relatively little about condensed matter physics, for example, and so on in other fields.

But the insularity of mainstream economics goes far beyond this necessary minimum. Though there are a few convenient exceptions who are trotted out to counter generalizations like I am making here, in general, Neoclassicals are blithely unaware of how their own school of thought developed, of empirical and theoretical results that contradict core tenets their own beliefs, of the competing schools of thought within economics, of the development of economics itself over time, and crucially, of intellectual developments that have extended human knowledge in fundamental ways that affect all fields of knowledge—including economics. Foremost here is their ignorance of complexity analysis, which has transformed many fields of study since its re-discovery in meteorology by Edward Lorenz in 1963 (Lorenz 1963).

  1. Complexity

Complexity is often defined by what it is not, so I will attempt a positive definition—which even so, still contains a negative:

A complex system is an often very simple dynamic system which, under certain very common conditions—nonlinear interactions between some of its three or more variables—generates extremely complicated far-from-equilibrium behaviour, which cannot be understood by reducing the system to its component parts (i.e., by reductionism).

Taking each element of this definition in turn:

  • A complex system is not necessarily complicated: Lorenz’s model, which I will discuss shortly, has just three equations and three parameters. Compare that to Ireland’s model, with 10 equations and 14 parameters. Lorenz’s model generates complex dynamics, while Ireland’s far more complicated model does not;
  • The conditions that generate them are very common because in essence, everything is nonlinear. Even a straight line is nonlinear because, unless it starts on one side of the Universe and ends on the other, the very fact that it stops is a nonlinearity. More to the point, an economic system has numerous instances where one variable is multiplied by another—for example, the wage bill is equal to the wage rate (a variable) multiplied by the number of employees (another variable);
  • Nonlinearity is essential, because effects like, for example, multiplying one variable by another, amplify disturbances. In linear models, as Blanchard himself pointed out, all effects are additive: a shock twice as big causes twice as big a disturbance. Nothing amplifies anything else;
  • Three or more variables are needed because, in the type of mathematics which is used to describe complex systems, the dimensionality of the model is equal to the number of variables, and the path your system draws cannot intersect itself. A one-variable system maps to a line, and along a line you can go left, right, or towards the middle without generating the same number twice, but that’s it. A two-variable system maps to a rectangle, and you can either spiral in towards its centre, or spiral in (or out) towards a fixed orbit, but that’s it. With a three-variable system, the dynamics maps to a box, and in a box you can weave incredibly complicated patterns without ever intersecting your path;
  • Far-from-equilibrium behaviour occurs because in such a system, given realistic parameters and initial conditions, one or more of the system’s equilibria can be what are called “strange attractors”: they attract the system from a distance, only to repel it when it gets near the equilibrium. Lorenz’s model has two “strange attractors”, while the economic model that I explain in Chapter 7—and which I first built in 1992—has one (Keen 1995); and
  • Reductionism can’t be used to understand such a model because, as soon as you reduce the system to one (or even two) of its components, the third dimension, which is essential for its complex dynamics, is eliminated.

The great French mathematician Henri Poincare discovered the first complex system when he solved the “three body problem” in 1889, but this was long before humanity developed the capacity to visualise such systems using computers. Though he became rightly famous for solving the three-body problem, his discovery of complexity languished until the phenomenon was rediscovered by the mathematical meteorologist Edward Lorenz in 1963 (Lorenz 1963).

Lorenz was dissatisfied with the weather modelling practices of the late 1950s, which boiled down to both pattern matching (looking for a set of weather events in the past that resembled the pattern of the last few days, and predicting that tomorrow’s weather would be the same as the next day in the historical record) and linear modelling—the practice which, as Blanchard explained in “Where Danger Lurks” (Blanchard 2014), still dominates mainstream macroeconomics today.

Lorenz decided to construct an extremely simplified model of fluid dynamics which preserved the essential nonlinearity of the weather, by reducing the extremely complicated, high-dimensional Navier-Stokes partial-differential equations to just three very simple ordinary differential equations. What he saw ultimately transformed not only meteorology, but almost every field of science. But it has had virtually no impact on economics.

A picture, as they say, is worth 1000 words, so Figure 4 shows a picture of Lorenz’s model (Lorenz 1963)—rendered in the Open-Source system dynamics program Minsky, which I have developed to enable complex systems modelling in economics.

Figure 4: Lorenz’s “strange attractor” model of turbulent flow

The system has just three variables (x, y and z), three parameters (a, b, and c), and just two nonlinear interactions: in the equation for y, x is multiplied by minus z, and in the equation for z, x is multiplied by y:

$$\frac{dx}{dt} = a\,(y - x), \qquad \frac{dy}{dt} = x\,(b - z) - y, \qquad \frac{dz}{dt} = x\,y - c\,z$$

And yet, despite this simplicity, the pattern generated by the system is incredibly complicated—and indeed, beautiful. It was called “The Butterfly Effect” for more reasons than one.

It could also have been called “The Mask of Zorro”, given the existence of two “eyes” in the phase plots for x against y, y against z, and z against x. Tellingly for equilibrium-obsessed economists, these eyes are in fact two of the three equilibria of the system. The third is where x = y = z = 0, which was the initial condition of the simulation shown in Figure 4. I then nudged the x-value 0.001 away from this equilibrium at the ten-second mark. After this disturbance, the system was propelled away from this unstable equilibrium towards the other two—the “eyes” in the three phase plots. These equilibria are “strange attractors”, which means that they describe regions that the system will never reach—even though they are also equilibria of the system.
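For readers who prefer code to flowcharts, here is a minimal Python sketch of that nudge experiment. The parameter values a = 10, b = 28 and c = 8/3 are Lorenz’s classic choices, and are an assumption here, since the text does not state the values used in the Minsky simulation; the nudge is applied at the start rather than at the ten-second mark, which makes no difference to the long-run behaviour of this autonomous system.

```python
import numpy as np
from scipy.integrate import solve_ivp

a, b, c = 10.0, 28.0, 8.0 / 3.0   # assumed: Lorenz's classic parameter values

def lorenz(t, state):
    x, y, z = state
    return [a * (y - x),        # dx/dt
            x * (b - z) - y,    # dy/dt: x multiplied by minus z
            x * y - c * z]      # dz/dt: x multiplied by y

# Start at the x = y = z = 0 equilibrium, nudged by 0.001 in x, and integrate.
sol = solve_ivp(lorenz, (0.0, 50.0), [0.001, 0.0, 0.0], max_step=0.01)
print(sol.y[:, -1])   # the trajectory stays bounded but never settles on an equilibrium
```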

This simulation shows that all three equilibria of Lorenz’s system are unstable: if the system starts at an equilibrium, it will remain there, but if it starts anywhere else, or is disturbed from the equilibrium even by an infinitesimal distance, it will be propelled away from it, and will forever display far-from-equilibrium dynamics.
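That instability can be checked directly. The sketch below (again assuming Lorenz’s classic parameter values, which are not stated in the text) evaluates the Jacobian of the system at each of its three equilibria and confirms that every one of them has an eigenvalue with a positive real part, so any departure from an equilibrium, however small, is amplified rather than damped.

```python
import numpy as np

a, b, c = 10.0, 28.0, 8.0 / 3.0   # assumed: Lorenz's classic parameter values

def jacobian(x, y, z):
    # Partial derivatives of the three Lorenz equations with respect to x, y and z
    return np.array([[  -a,    a,  0.0],
                     [b - z, -1.0,  -x ],
                     [   y,    x,   -c ]])

r = np.sqrt(c * (b - 1.0))   # the two "eyes": x = y = ±sqrt(c(b-1)), z = b-1
equilibria = [(0.0, 0.0, 0.0), (r, r, b - 1.0), (-r, -r, b - 1.0)]

for eq in equilibria:
    eigenvalues = np.linalg.eigvals(jacobian(*eq))
    print(eq, "-> max Re(eigenvalue) =", round(float(max(eigenvalues.real)), 3))
# All three equilibria return a positive maximum real part, so even a nudge of
# 0.001, as in the simulation described above, is enough to propel the system away.
```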

And yet it does not “break down”: the simulation returns realistic values that stay within the bounds of the system. This contrasts strongly with the presumption once expressed by Hicks in relation to Harrod’s “knife-edge” model of economic instability (Harrod 1939, 1948), that models must assume stable equilibria, because a model with an unstable equilibrium “does not fluctuate; it just breaks down”:

Mr. Harrod … welcomes the instability of his system, because he believes it to be an explanation of the tendency to fluctuation which exists in the real world… But mathematical instability does not in itself elucidate fluctuation. A mathematically unstable system does not fluctuate; it just breaks down. (Hicks 1949, p. 108)

This belief, still prevalent amongst economists, is only true of linear systems—and even then, not of all of them (Keen 2020). It is categorically false for nonlinear systems.

Finally, not only is the pattern beautiful, it is also aperiodic: no one cycle is like any other. Before Lorenz’s work, scientists thought that aperiodic cycles would require exogenous shocks; after Lorenz’s work, only economists continue to assume that random shocks to a system are needed to cause aperiodicity.

It is to the great credit of meteorology that, very rapidly, Lorenz’s demonstration of the necessity of nonlinear, far-from-equilibrium modelling was accepted by meteorologists. There is much more to weather modelling than just Lorenz’s complex systems foundation, but his work contributed fundamentally to the dramatic increase in the accuracy of weather forecasts over the last half century—even in the face of global warming that is disturbing the underlying climate which determines the weather.

Likewise, Lorenz’s discovery was considered and applied by all manner of sciences, leading to complexity analysis operating as an important adjunct to the reductionist approach that remains the bedrock of scientific analysis. In 1999, the journal Science recognised this with a special issue devoted to complexity in numerous fields: Physics (Goldenfeld and Kadanoff 1999), Chemistry (Whitesides and Ismagilov 1999), Biology (Parrish and Edelstein-Keshet 1999; Pennisi 1999; Weng, Bhalla, and Iyengar 1999), Evolution (Service 1999), Geography (Werner 1999)—and even Economics (Arthur 1999).

Today, complex systems are an uncontroversial aspect of every science—but not of economics, because the dominant methods in economics are antithetical to the foundations of complexity. These methods include linearity, as Blanchard acknowledged, but more crucially, they involve a perverted form of reductionism that Physics Nobel Laureate Philip Anderson christened “Constructionism”.

  1. The impossibility of constructionism with complex systems

Anderson’s “More is Different” (Anderson 1972) attacked the idea that, in line with Ernest Rutherford’s quip that “all science is either physics or stamp collecting”, higher-level sciences—like chemistry, biology and even psychology—can and should be reduced to applied physics. Speaking as someone who had made fundamental contributions to particle physics, Anderson asserted that, though “The reductionist hypothesis … among the great majority of active scientists … is accepted without question”, this did not mean that higher-level sciences like chemistry could be generated from what we know about physics. “The main fallacy in this kind of thinking”, he declared:

is that the reductionist hypothesis does not by any means imply a “constructionist” one: The ability to reduce everything to simple fundamental laws does not imply the ability to start from those laws and reconstruct the universe. (Anderson 1972, p. 393. Emphasis added)

The phenomena that made this approach untenable were “the twin difficulties of scale and complexity”, since:

The behavior of large and complex aggregates of elementary particles, it turns out, is not to be understood in terms of a simple extrapolation of the properties of a few particles. Instead, at each level of complexity entirely new properties appear, and the understanding of the new behaviors requires research which I think is as fundamental in its nature as any other…

At each stage entirely new laws, concepts, and generalizations are necessary, requiring inspiration and creativity to just as great a degree as in the previous one. Psychology is not applied biology, nor is biology applied chemistry. (Anderson 1972, p. 393)

And nor is macroeconomics applied microeconomics—but mainstream economists, because of their extreme insularity, haven’t gotten Anderson’s memo. They continue to attempt the impossible: to construct the higher-level analysis of macroeconomics via a direct application of the lower-level analysis of microeconomics. That is only possible if all relationships in macroeconomics are linear, as Blanchard described them in “Where Danger Lurks” (Blanchard 2014)—when the lesson of the Global Financial Crisis was that they obviously are not. In a subsequent chapter, I will show that there are fundamental nonlinearities in macroeconomics that should be embraced, rather than ignored.

Fittingly, Anderson concluded with two anecdotes from economics:

In closing, I offer two examples from economics of what I hope to have said. Marx said that quantitative differences become qualitative ones, but a dialogue in Paris in the 1920’s sums it up even more clearly:

FITZGERALD: The rich are different from us.

HEMINGWAY: Yes, they have more money. (Anderson 1972, p. 396)

Marx, and money, are two other things that Neoclassical economists ignore. But even more critically, they ignore the logical and empirical fallacies that beset microeconomics. Even if it were possible to derive macroeconomics from microeconomics, Neoclassical microeconomics is not the foundation one should use, because it is manifestly wrong about both consumption and production. Some Neoclassicals are aware of the logical problems with their model of consumption—though their reactions to it are bizarre. But none of them are aware of the empirical fallacies in their model of production.

References

 

Anderson, P. W. 1972. ‘More Is Different: Broken symmetry and the nature of the hierarchical structure of science’, Science, 177: 393-96.

Arthur, W. Brian. 1999. ‘Complexity and the Economy’, Science, 284: 107-09.

Barro, Robert J. 1989. ‘The Ricardian Approach to Budget Deficits’, The Journal of Economic Perspectives, 3: 37-54.

Blanchard, Olivier. 2014. ‘Where Danger Lurks’, Finance & Development, 51.

Goldenfeld, Nigel, and Leo P. Kadanoff. 1999. ‘Simple Lessons from Complexity’, Science, 284: 87-89.

Harrod, R. F. 1939. ‘An Essay in Dynamic Theory’, The Economic Journal, 49: 14-33.

———. 1948. Towards a Dynamic Economics (Macmillan: London).

Hicks, J. R. 1949. ‘Mr. Harrod’s Dynamic Theory’, Economica, 16: 106-21.

Keen, Steve. 1995. ‘Finance and Economic Breakdown: Modeling Minsky’s “Financial Instability Hypothesis”’, Journal of Post Keynesian Economics, 17: 607-35.

———. 2020. ‘Burying Samuelson’s Multiplier-Accelerator and resurrecting Goodwin’s Growth Cycle in Minsky.’ in Robert Y. Cavana, Brian C. Dangerfield, Oleg V. Pavlov, Michael J. Radzicki and I. David Wheat (eds.), Feedback Economics : Applications of System Dynamics to Issues in Economics (Springer: New York).

Li, Tien-Yien, and James A. Yorke. 1975. ‘Period Three Implies Chaos’, The American Mathematical Monthly, 82: 985-92.

Lorenz, Edward N. 1963. ‘Deterministic Nonperiodic Flow’, Journal of the Atmospheric Sciences, 20: 130-41.

Mendelsohn, Robert, William D. Nordhaus, and Daigee Shaw. 1994. ‘The Impact of Global Warming on Agriculture: A Ricardian Analysis’, The American Economic Review, 84: 753-71.

Parrish, Julia K., and Leah Edelstein-Keshet. 1999. ‘Complexity, Pattern, and Evolutionary Trade-Offs in Animal Aggregation’, Science, 284: 99-101.

Pennisi, Elizabeth. 1999. ‘Unraveling Bacteria’s Dependable Homing System’, Science, 284: 82-82.

Ricardo, David, and Piero Sraffa. 1952. The works and correspondence of David Ricardo / edited by Piero Sraffa, with the collaboration of M.H. Dobb Vol.6, Letters, 1810-1815 (Cambridge University Press for the Royal Economic Society: Cambridge).

Service, Robert F. 1999. ‘Exploring the Systems of Life’, Science, 284: 80-83.

Weng, Gezhi, Upinder S. Bhalla, and Ravi Iyengar. 1999. ‘Complexity in Biological Signaling Systems’, Science, 284: 92-96.

Werner, B. T. 1999. ‘Complexity in Natural Landform Patterns’, Science, 284: 102-04.

Whitesides, George M., and Rustem F. Ismagilov. 1999. ‘Complexity in Chemistry’, Science, 284: 89-92.

 

Soul-searching by a soulless discipline

The dominance of micro-founded macroeconomic models—models derived directly from the microeconomic concepts of utility-maximizing individuals and profit-maximizing firms, and based on the Ramsey Neoclassical growth model (Ramsey 1928)—did not go unchallenged prior to the Global Financial Crisis. But the critics were treated in the time-honoured Neoclassical way: ignored and disparaged if, like me, they were not Neoclassicals themselves, or politely listened to but still effectively ignored if they were.

Pre-eminent amongst the tolerated critics was Robert Solow, a recipient of the “Nobel” Prize in Economics in 1987 for his work on a Neoclassical theory of economic growth (Solow 1956). In a series of papers (Solow 1994, 2001, 2003, 2006, 2007, 2008), Solow railed against the very idea of building macroeconomic analysis on the foundation of Ramsey’s growth model.

At a Festschrift for another economics “Nobel” recipient, Joseph Stiglitz, Solow delivered a dismissive judgment on micro-founded macroeconomics in a paper provocatively entitled “Dumb and Dumber in Macroeconomics”. Solow began with the question of “So how did macroeconomics arrive at its current state? The answer might provide a lead as to where it ought to go”. He continued:

The original impulse to look for better or more explicit micro foundations was probably reasonable… What emerged was not a good idea. The preferred model has a single representative consumer optimizing over infinite time with perfect foresight or rational expectations, in an environment that realizes the resulting plans more or less flawlessly through perfectly competitive forward-looking markets for goods and labor, and perfectly flexible prices and wages.

How could anyone expect a sensible short-to-medium-run macroeconomics to come out of that set-up? (Solow 2003. Emphasis added)

He also disparaged the assumption of equilibrium through time—which is imposed on a model that in fact has an unstable equilibrium—stating that “This choice between equilibrium and disequilibrium thinking may be a false choice”. He continued with the colourful metaphor that:

If I drop a ripe watermelon from this 15th-floor window, I suppose the whole process from t0 to the mess on the sidewalk could be described as some sort of dynamic equilibrium. But that may not be the most fruitful—sorry—way to describe the falling-watermelon phenomenon. (Solow 2003)

When the crisis hit, Solow was one of several economists invited by the US Congress’s House Committee on Science and Technology Subcommittee on Investigations and Oversight to explain what went wrong, in a hearing entitled “Building a Science of Economics for the Real World”. His testimony, as colourful as ever, highlighted a key problem for economics: people schooled in this tradition had largely lost the capacity for critical thought about it:

every proposition must pass the smell test: does this really make sense? I do not think that the currently popular DSGE models pass the smell test. They take it for granted that the whole economy can be thought about as if it were a single, consistent person or dynasty carrying out a rationally designed, long-term plan, occasionally disturbed by unexpected shocks, but adapting to them in a rational, consistent way. I do not think that this picture passes the smell test… The advocates no doubt believe what they say, but they seem to have stopped sniffing or to have lost their sense of smell altogether. (Solow 2010. Emphasis added)

Solow’s quip that the advocates of modern Neoclassical macroeconomic modelling had “lost their sense of smell altogether” neatly characterized the debate that ensued amongst these economists in the aftermath of the Global Financial Crisis. They could not deny that the crisis had happened, but likewise they could not contemplate that their models—which had not only not seen it coming, but had predicted a bountiful economic harvest when a famine ensued—could possibly be wrong. Their dialogue resembled men—and they are almost exclusively men—without a sense of smell, trying to distinguish the aroma of a rose garden from the stink of a sewer.

My favorite “representative agent” in this journey of non-discovery is Olivier Blanchard. Blanchard was the “Class of 1941” Professor of Economics at MIT from 1994 till 2010, Chair of its Department of Economics from 1998 till 2003, Chief Economist of the IMF from September 2008 till 2015, and the Robert M. Solow Professor of Economics at MIT from 2010 to 2014 (which is somewhat ironic, given that his opinion of DSGE models differs vastly from Solow’s), as well as President of the American Economic Association in 2018. The only major mainstream economic guernsey he lacks is a “Nobel” Prize.

He began his journey in blissful ignorance of the economic crisis unfolding around him. In August 2008, Blanchard self-published an NBER working paper with the title “The State of Macro”, in which he declared that “The state of macro is good”. Starting with a portrayal of the initial conflicts between “New Classicals” and “New Keynesians”, he opined that:

there has been enormous progress and substantial convergence. For a while—too long a while—the field looked like a battlefield. Researchers split in different directions, mostly ignoring each other, or else engaging in bitter fights and controversies. Over time however, largely because facts have a way of not going away, a largely shared vision both of fluctuations and of methodology has emerged. Not everything is fine. Like all revolutions, this one has come with the destruction of some knowledge, and suffers from extremism, herding, and fashion. But none of this is deadly. The state of macro is good. (Blanchard 2008, p. 2)

To call this blind ignorance is to insult the unsighted. The crisis is regarded as having started on August 9th, 2007—precisely a year before he uploaded this paper—when BNP Paribas Investment Partners shut down redemptions from three of its investment funds that were based on the US housing market. Figure 1 also shows that the rate of economic growth peaked in 2006 Q4 (at 4.7% in the Rest of the World and 3.7% in the USA). By the time of the BNP Paribas announcement, growth in the USA had faltered to 2.3%; in the subsequent quarter (2007 Q4) it was 0.2%. By the third quarter of 2008—which includes August, when Blanchard released his paper—it was minus 2%.

Perhaps in atonement for this monumentally badly-timed and false homage to mainstream economics, Blanchard subsequently published a string of papers that tried to assess why the state of macro was, in fact, extremely bad, and to propose what might be done to fix it (Blanchard 2014, 2016a, 2016b, 2018).

His first sortie, published in the IMF’s semi-populist journal Finance and Development, had the somewhat cartoonish title “Where Danger Lurks” (Blanchard 2014), and it was accompanied by a cartoon demon, as shown in Figure 3. Nonetheless, this paper contained the most perceptive observations that Blanchard managed to make about the failure of macroeconomic theory. He focused on the assumption that economic fluctuations were linear—”so that small shocks had small effects and a shock twice as big as another had twice the effect”:

Until the 2008 global financial crisis, mainstream U.S. macroeconomics had taken an increasingly benign view of economic fluctuations in output and employment. The crisis has made it clear that this view was wrong and that there is a need for a deep reassessment…

The techniques we use affect our thinking in deep and not always conscious ways… These techniques however made sense only under a vision in which economic fluctuations were regular enough so that, by looking at the past, people and firms … could understand their nature and form expectations of the future, and simple enough so that small shocks had small effects and a shock twice as big as another had twice the effect on economic activity…

We in the field did think of the economy as roughly linear, constantly subject to different shocks, constantly fluctuating, but naturally returning to its steady state over time… Whatever caused the Great Moderation, for a quarter century the benign, linear view of fluctuations looked fine… That small shocks could sometimes have large effects and, as a result, that things could turn really bad, was not completely ignored by economists. But such an outcome was thought to be a thing of the past that would not happen again… (Blanchard 2014, p. 28)

 

Figure 3: Blanchard’s first, and deepest, consideration of why macroeconomic theory failed

Apart from these valid insights, the paper was more notable for its illustrations than for any intellectual revolution in its content. Blanchard’s main policy advice was that we should “Stay away from dark corners” (Blanchard 2014, p. 31), but he gave no means by which “dark corners” could be identified. Though he called for research to “let a hundred flowers bloom”:

Now that we are more aware of nonlinearities and the dangers they pose, we should explore them further theoretically and empirically—and in all sorts of models. (Blanchard 2014, p. 31)

He also made the bizarre argument that if—somehow, and without any guidance from economic theory—policymakers could “maintain a healthy distance from dark corners”, then it would be OK for economic theory to march on unaltered:

But this answer skirts a harder question: How should we modify our benchmark models—the so-called dynamic stochastic general equilibrium (DSGE) models that we use, for example, at the IMF to think about alternative scenarios and to quantify the effects of policy decisions? The easy and uncontroversial part of the answer is that the DSGE models should be expanded to better recognize the role of the financial system—and this is happening. But should these models be able to describe how the economy behaves in the dark corners?

Let me offer a pragmatic answer. If macroeconomic policy and financial regulation are set in such a way as to maintain a healthy distance from dark corners, then our models that portray normal times may still be largely appropriate…Trying to create a model that integrates normal times and systemic risks may be beyond the profession’s conceptual and technical reach at this stage (Blanchard 2014, p. 31. Emphasis added)

How on Earth could policymakers “maintain a healthy distance from dark corners” if they had no theoretical guidance as to where they were? And if they could work it out for themselves by empirical observation, then what need was there for economists in the first place?

The real dark corner from which Blanchard was retreating was the prospect that the Neoclassical paradigm was in fact fundamentally wrong about the nature of the macroeconomy.

His next paper began with sound criticisms of DSGE models for being “based on unappealing assumptions. Not just simplifying assumptions, as any model must, but assumptions profoundly at odds with what we know about consumers and firms” (Blanchard 2016a, p. 1). But by the end, he could see no alternative to the core of DSGE modelling, of deriving macroeconomics from microeconomic foundations:

The pursuit of a widely accepted analytical macroeconomic core, in which to locate discussions and extensions, may be a pipe dream, but it is a dream surely worth pursuing. If so… Starting from explicit microfoundations is clearly essential; where else to start from? Ad hoc equations will not do for that purpose. Thinking in terms of a set of distortions to a competitive economy implies a long slog from the competitive model to a reasonably plausible description of the economy. But, again, it is hard to see where else to start from. (Blanchard 2016a, p. 3. Emphasis added)

Blanchard’s final word on the need to reform economic theory was written after interactions with a number of economists, including me:

A number of economists joined the debate about the pros and cons of dynamic DSGEs, partly in response to my blog post. Among them were Narayana Kocherlakota (2016), Simon Wren-Lewis (2016), Paul Romer (2016), Steve Keen (2016), Anton Korinek (2015), Paul Krugman (2016), Noah Smith (2016), Roger Farmer (2014), and Brad Delong (2016)…

In a sign of how incapable mainstream economists are of comprehending fundamental challenges to their methodology, he followed up this acknowledgment with this putative summary of agreed positions:

I believe that there is wide agreement on the following three propositions; let us not discuss them further, and move on.

i) Macroeconomics is about general equilibrium… (Blanchard 2018, p. 49. Emphasis added)

I was literally gobsmacked by this alleged point of agreement, and said so at the time, but to no avail. Far from agreeing that “Macroeconomics is about general equilibrium”, in the post of mine that Blanchard cited, I had argued that nonlinear, far-from-equilibrium dynamics had to be the basis of macroeconomic modelling:

Imposing linearity on a nonlinear system is a valid procedure if, and only if, the equilibrium around which the model is linearized is stable… The mathematically more valid approach is to accept that, if your model’s equilibria are unstable, then your model will display far-from-equilibrium dynamics, rather than oscillating about and converging on an equilibrium. This requires you to understand and apply techniques from complex systems analysis, which is much more sophisticated than the mathematics Neoclassical modelers use.

Just as Blanchard ultimately meandered back to DSGE modelling, so did Neoclassical economics: fifteen years after the crisis, DSGE models remain the dominant methodology in macroeconomic modelling. It is as if the crisis itself never occurred. All that has happened is that some modellers have calibrated their models to ex-post fit the crisis, as if that is a sufficient response.

This process began very soon after the crisis, with Peter Ireland’s paper “A New Keynesian Perspective on the Great Recession” (Ireland 2011). Though he began by admitting that “the Great Recession’s extreme severity makes it tempting to argue that new theories are required to fully explain it” (Ireland 2011, p. 31), he quickly disparaged what I will shortly show is in fact the correct approach—”Attempts to explain movements in one set of endogenous variables, like GDP and employment, by direct appeal to movements in another, like asset market valuations or interest rates, sometimes make for decent journalism but rarely produce satisfactory economic insights” (Ireland 2011, p. 32)—and moved back to the bread and butter of DSGE modelling: explaining all macroeconomic phenomena as being due to “exogenous shocks” disturbing a fundamentally stable economic system.

His conclusion, after developing and numerically solving a “small-scale model” (Ireland 2011, p. 52)—which had ten equations and 14 exogenous parameters, and was subjected to four types of exogenous shocks, to consumer preferences, production costs, technology and monetary policy—was that the difference between the worst economic crisis since the Great Depression, and the two relatively mild recessions that preceded it, was that the shocks that caused the “Great Recession” lasted longer and grew bigger over time:

the Great Recession began in late 2007 and early 2008 with a series of adverse preference and technology shocks in roughly the same mix and of roughly the same magnitude as those that hit the United States at the onset of the previous two recessions…

The string of adverse preference and technology shocks continued, however, throughout 2008 and into 2009. Moreover, these shocks grew larger in magnitude, adding substantially not just to the length but also to the severity of the great recession. (Ireland 2011, p. 48)

Ireland concluded that “All of these results indicate that the basic New Keynesian model continues to serve as a reliable guide for business cycle analysis and monetary policy evaluation” (Ireland 2011, p. 52).

A more sensible conclusion is the one Enrico Fermi gave to Freeman Dyson, when Dyson proudly showed Fermi numerical calculations that matched one of Fermi’s experimental results:

“There are two ways of doing calculations in theoretical physics”, he said. “One way, and this is the way I prefer, is to have a clear physical picture of the process that you are calculating. The other way is to have a precise and self-consistent mathematical formalism. You have neither.” (Dyson 2004)

When Dyson protested, Fermi asked “How many arbitrary parameters did you use for your calculations?”:

I thought for a moment about our cut-off procedures and said, “Four.” He said, “I remember my friend Johnny von Neumann used to say, with four parameters I can fit an elephant, and with five I can make him wiggle his trunk.” With that, the conversation was over. (Dyson 2004)

With the 14 arbitrary parameters Ireland used, von Neumann could doubtless make his elephant fly while copulating. Though economics is not applied physics, we need to take heed of Fermi’s advice that we need either “a clear physical picture of the process that you are calculating” or “a precise and self-consistent mathematical formalism.” Both can be constructed once we embrace the inherent complexity of the economic system, and abandon the Neoclassical fetishes of microfoundations, linearity, and equilibrium.

Rebuilding Economics from the Top Down—a work in progress

I have just commenced a six-month research project at the Budapest Centre for Long-Term Sustainability (https://bc4ls.com/), and one of my allotted tasks is to write a 30,000 word book. With apologies to my good friend Blair Fix, my working title echoes that of his blog (https://economicsfromthetopdown.com/): “Rebuilding Economics from the Top Down”.

I start with the hubris of mainstream economics prior to the Global Financial Crisis. The next instalment will discuss their failed attempt at soul-searching after the crisis, which has resulted in models that failed to anticipate the Global Financial Crisis still dominating the profession today.

I will post draft chapters as I complete them on my Patreon (https://www.patreon.com/ProfSteveKeen) and Substack (https://profstevekeen.substack.com/) pages. Please consider signing up to one or the other, so that I can continue to make my research freely available. My intention is to post at least one chapter each week for the next three months.

The Magnificent Failure of Mainstream Economics

For the last fifty years, the development of economics has been driven by the desire to derive macroeconomic analysis directly from microeconomic theory. This research program was a theoretical success and a practical failure.

Modern macroeconomic modelling, initially in the form of Real Business Cycle (RBC) and later Dynamic Stochastic General Equilibrium (DSGE) models, is firmly based on the utility and profit maximizing principles of microeconomics, and in particular, the growth model developed by Frank Ramsey in 1928 (Ramsey 1928). The success of this process of derivation, combined with the economic conditions of the last decade of the 20th century and the first half-decade of the 21st, led mainstream economists to believe that they had found the economic “Holy Grail”: a well-grounded theory of economics which also enabled economists to manage the economy successfully.

Robert Lucas, one of the key architects of the “microfoundations revolution”, gave a triumphalist perspective on the state of economics in his Presidential Address to the American Economic Association in January 2003:

Macroeconomics was born as a distinct field in the 1940’s, as a part of the intellectual response to the Great Depression. The term then referred to the body of knowledge and expertise that we hoped would prevent the recurrence of that economic disaster. My thesis in this lecture is that macroeconomics in this original sense has succeeded: Its central problem of depression prevention has been solved, for all practical purposes, and has in fact been solved for many decades… Taking U.S. performance over the past 50 years as a benchmark, the potential for welfare gains from better long-run, supply-side policies exceeds by far the potential from further improvements in short-run demand management. (Lucas 2003, p. 1)

A similar triumphalism pervaded policy circles. Ben Bernanke, speaking just 16 months before he became Chairman of the Federal Reserve, declared in October 2004 that:

the low-inflation era of the past two decades has seen not only significant improvements in economic growth and productivity but also a marked reduction in economic volatility, both in the United States and abroad, a phenomenon that has been dubbed “the Great Moderation.” Recessions have become less frequent and milder, and quarter-to-quarter volatility in output and employment has declined significantly as well. The sources of the Great Moderation remain somewhat controversial, but as I have argued elsewhere, there is evidence for the view that improved control of inflation has contributed in important measure to this welcome change in the economy. (Bernanke 2004)

Likewise, economic modellers were confident that their mathematical models of the economy could accurately predict its future behaviour. Though, as I note below, there were some elements of conflict, criticism and scepticism within mainstream academic economics, model builders, who were the dominant faction, were boastfully loud and proud. In June 2007, the authors of the most celebrated DSGE model proclaimed its capacity to out-perform mere econometric techniques, and to forecast the economy’s future path:

Using a Bayesian likelihood approach, we estimate a dynamic stochastic general equilibrium model for the US economy using seven macroeconomic time series. The model incorporates many types of real and nominal frictions and seven types of structural shocks. We show that this model is able to compete with Bayesian Vector Autoregression models in out-of-sample prediction. We investigate the relative empirical importance of the various frictions. Finally, using the estimated model, we address a number of key issues in business cycle analysis: What are the sources of business cycle fluctuations? Can the model explain the cross correlation between output and inflation? What are the effects of productivity on hours worked? What are the sources of the “Great Moderation”? (Smets and Wouters 2007, p. 586).

This confidence rubbed off on international economic bodies, with the OECD entitling the editorial to its June 2007 Economic Outlook report “Achieving further rebalancing”, and declaring that:

the current economic situation is in many ways better than what we have experienced in years. Against that background, we have stuck to the rebalancing scenario. Our central forecast remains indeed quite benign: a soft landing in the United States, a strong and sustained recovery in Europe, a solid trajectory in Japan and buoyant activity in China and India. In line with recent trends, sustained growth in OECD economies would be underpinned by strong job creation and falling unemployment. (Cotis 2007, p. 7)

“Pride”, as the proverb goes, “goeth before destruction, and an haughty spirit before a fall”. Just 2 months after the OECD foresaw “strong job creation and falling unemployment”, and Smets and Wouters crowed about the out-of-sample predictive powers of their DSGE model, less than three years after Bernanke heralded “this welcome change in the economy”, and less than five years after Lucas proclaimed that the “problem of depression prevention has been solved”, the global economy collapsed into the greatest economic crisis since the Great Depression.

As the top plot in Figure 1 shows, the rate of economic growth crashed from 4.5% per year to minus 3%, the opposite of the “buoyant activity in China and … sustained growth in OECD economies” that the OECD advised was in store for 2008 and 2009. Just as worryingly, the rise in unemployment that always occurs during a recession was accompanied this time by a phenomenon that had not been seen since The Great Depression itself: deflation. Though in the end, in response to significant government policy interventions, the period of deflation was short-lived, it nonetheless occurred: the rate of growth of consumer prices in the USA collapsed from over 5% per year in late 2007 to under minus 1% per year by mid-2009. In stark contrast to Lucas’s confidence that “welfare gains from better long-run, supply-side policies exceeds by far [his emphasis] the potential from further improvements in short-run demand management”, economists who were ill-equipped for the job found themselves desperately trying to boost short-run economic demand.

Figure 1: Growth rates from 1990-2015 for the USA & 25 other major economies

Here they also failed, in comparison to the “Keynesian” economic orthodoxy which preceded them. The hapless President Obama, whose degrees were in political science and law, took the advice of dominant mainstream economics figures—such as Larry Summers, whom he appointed Director of the USA’s National Economic Council, and Timothy Geithner, whom he appointed Secretary of the Treasury—that the way to end the crisis quickly was to pump the banking system full of excess Reserves. This, Obama was assured, would stimulate the economy more rapidly than, for example, putting money directly into the hands of households. Obama lent his oratorical skills to this mainstream economic advice, stating in a speech at Georgetown University in April 2009 that:

although there are a lot of Americans who understandably think that government money would be better spent going directly to families and businesses instead of banks—”where’s our bailout?,” they ask—the truth is that a dollar of capital in a bank can actually result in eight or ten dollars of loans to families and businesses, a multiplier effect that can ultimately lead to a faster pace of economic growth. (Obama 2009. Emphasis added)

The incredibly brief Covid recession threw into high relief just how wrong this conventional economic advice was. When Covid lockdowns forced people out of work, the government response was to give households immediate financial stimulus—and the recession was over in 2 months. The “Great Recession”, as Americans christened what the rest of the world called the “Global Financial Crisis”, lasted one and a half years, making it the second-longest recession in the USA in the past century. Only the first phase of the Great Depression, from August 1929 till March 1933, lasted longer.

Figure 2: The drawn-out recession of the Global Financial Crisis, versus the almost instantly-over Covid Recession

A theoretical and practical failure as big as this—to have models that did not show that the biggest macroeconomic event in a century was imminent, and to be so unprepared for a crisis that the policies advocated by economists made the recovery worse—did provoke some critical reflection amongst economists. But their reflections lacked any capacity to imagine any other way to build macroeconomics apart from the method that had already failed them.

References

Bernanke, Ben S. 2004. “Panel discussion: What Have We Learned Since October 1979?” In Conference on Reflections on Monetary Policy 25 Years after October 1979. St. Louis, Missouri: Federal Reserve Bank of St. Louis.

Cotis, Jean-Philippe. 2007. ‘Editorial: Achieving Further Rebalancing.’ in OECD (ed.), OECD Economic Outlook (OECD: Paris).

Lucas, Robert E., Jr. 2003. ‘Macroeconomic Priorities’, American Economic Review, 93: 1-14.

Obama, Barack. 2009. “Obama’s Remarks on the Economy.” In. New York: New York Times.

Ramsey, F. P. 1928. ‘A Mathematical Theory of Saving’, The Economic Journal, 38: 543-59.

Smets, Frank, and Rafael Wouters. 2007. ‘Shocks and Frictions in US Business Cycles: A Bayesian DSGE Approach’, American Economic Review, 97: 586-606.

 

The Failure of Neoliberalism: Backing Up Macro Alf, & Showcasing Ravel, in 11 plots and two averages

The macro commentator Alfonso Peccatiello, who writes as @MacroAlf on Twitter/X and publishes the Macro Compass newsletter, recently posted an excellent thread on private debt that cited my work:

Let me show you one of the most underrated and yet crucial long-term macro variables in the world. Debt. But not government debt: people should stop obsessing it! The government can print money in its own currency. Of course, this has limitations: capacity constraints, inflation, credibility…but there is much more vulnerable source of debt out there. Private sector debt levels and trends are by far a more important macro variable to follow.

Let me explain why. The private sector doesn’t have the luxury to print money: if you get indebted to your eyeballs and you lose your ability to generate income, the pain is real. This amazing chart from my friend @darioperkins proves the point quite eloquently…

Figure 1: Alf’s chart of private debt to GDP bubble for 4 key economies

This post follows up on Alf’s lead by producing a private debt-focused profile of all the major economies in the OECD whose debt levels are also recorded by the Bank for International Settlements (BIS). It combines data on inflation and unemployment rates from the OECD with private and government debt and house price data from the BIS.

The plots in this post run in reverse alphabetical order from the United States (see Figure 2) to Australia. Their message is the same as Alf’s in his tweet stream (x-stream?): private debt matters, and the fact that conventional Neoclassical economics ignores it is a major reason why it has failed as a guide to economic theory and policy.

This post also showcases Ravel©™, a multidimensional analytic database which I have designed, and which Minsky’s programmer Dr Russell Standish has coded. We hope to release Ravel commercially in 2024. If you like what you see here, then let me know in the comments and I’ll add you to the early pre-release of Ravel, which we hope will occur in early 2024 (comments are only open to paid subscribers on Patreon and Substack). The end of the post will briefly explain how the plots were created.

Figure 2: The USA’s data

Several common themes turn up in almost all countries in these plots. First and foremost, the shift from so-called “Keynesian” to “Neoclassical/Neoliberal” economic policies that began in the mid-1970s, which was supposed to unleash the private sector from the shackles of the State, has failed on its own terms. The rate of economic growth under Neoliberalism—which I date from the beginning of 1975, when a surge in inflation empowered the political and academic rise of Milton Friedman’s “Monetarism”—has been lower for every country in this database. Rather than the Neoclassicals showing the Keynesians how to promote growth, the Neoclassicals have shown how to turn real economic performance into permanent financial speculation:

  • The rate of economic growth (top left graph) has trended down over time, and rather than reversing this trend, Neoliberalism has accentuated it.
  • The unemployment rate is higher and more volatile than in the “bad old Keynesian days”.
  • Neoliberalism might claim the low-inflation period after the GFC as a success story, but even so, post-GFC inflation is higher than the record of the 1960s.
  • The average rate of economic growth has been substantially lower since the deregulation fetish began than before it: at 1.8% p.a., the USA’s average growth rate under Neoliberalism is barely more than half the 3.25% p.a. recorded from 1945 till 1975.
  • The main growth success of the Neoliberal period has been an unprecedented increase in private debt. In America’s case, it has tripled from under 60% of GDP at the end of WWII to a peak of almost 180% during the GFC (Global Financial Crisis).
  • Government debt is much lower than private debt, and yet it’s government debt that is the unwarranted focus of economic discussion and policy, as Alf pointed out. Nonetheless, even government debt has risen under Neoliberalism, and in the USA’s case the post-GFC level exceeds the pre-Neoliberal peak. That reducing the ratio of government debt to GDP was even made a (failed) policy target is a sign of how little Neoclassical economists know about the economy, since government debt is in fact a record of fiat money creation.
  • By stifling the growth of Fiat money, the Neoliberal period encouraged the growth of Credit—the annual change in private debt. While Neoclassical economists like Ben Bernanke bleat that “pure redistributions [as they falsely characterise new private debt] should have no significant macroeconomic effects” (Bernanke 2000, p. 24), Credit is strongly negatively correlated with unemployment: when Credit is high, unemployment is low, and vice versa (a minimal sketch of this calculation follows this list). This shows, as Alf argued, that credit is a far more important determinant of economic performance than government spending, and yet mainstream economic theory and policy continue to ignore it.
  • Government money creation tends to be driven by the rise and fall of unemployment. Since unemployment in general has risen under Neoliberalism, and economic crises that Neoclassicals thought they had abolished (“macroeconomics in this original sense has succeeded: Its central problem of depression prevention has been solved, for all practical purposes, and has in fact been solved for many decades”: Lucas 2003, p. 1) have become more extreme and frequent, panic rather than policy has driven government spending higher as a proportion of GDP under their misguided guidance.
  • House prices were tame before Neoliberalism and have been wild ever since, because the housing market has been the main destination of rising private debt. Though Neoliberal politicians like Reagan and Thatcher, Clinton and Blair, believed they were liberating the private sector in general, what they really did was let the financial sector rip.
  • The fuel beneath rising house prices has been rising household debt. This has risen even more than total private debt—in the USA’s case, household debt is now five times as large, relative to GDP, as it was at the end of WWII.
  • Turning housing from a long-term consumption item into a financial commodity has led to house price volatility. Richard Vague’s magisterial survey of the last two centuries of financial crises (Vague 2019) shows that house price bubbles are overwhelmingly the main factor behind financial crises. This is what Neoliberalism really gave us.
  • Lastly, there is a causal link between change in the level of new mortgage debt and change in house prices. Since houses are bought primarily with borrowed money, the flow of new mortgages is the main monetary foundation of the existing price level. It follows that change in new mortgages drives change in house prices. Neoliberalism promised economic prosperity, but by its ignorance of money has turned our economies into unstable, private-debt-financed casinos.
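As promised in the Credit bullet above, here is a minimal Python sketch of that calculation, using made-up annual figures (purely illustrative, not data for any actual country):

```python
import pandas as pd

# Made-up annual figures; illustrative only, not data for any actual country.
private_debt_gdp = pd.Series([120.0, 125.0, 133.0, 140.0, 138.0, 130.0, 128.0, 131.0])
unemployment     = pd.Series([  6.0,   5.5,   4.8,   4.4,   6.5,   9.0,   8.5,   7.8])

# Credit is the annual change in private debt (here in percentage points of GDP).
credit = private_debt_gdp.diff()

# The bullet's claim: Credit and unemployment move in opposite directions.
print(credit.corr(unemployment))   # a strongly negative number for this toy data
```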

The following plots are displayed in Ravel simply by moving the selector dot on the master Ravel in the right-hand corner of the model. Figure 3 to Figure 25 show the same data for other countries, in reverse alphabetical order from the UK to Australia.

Figure 3: UK

Figure 4: Switzerland

Figure 5: Sweden

Figure 6: Spain

Figure 7: Portugal

Figure 8: Poland

Figure 9: Norway

Figure 10: New Zealand

Figure 11: Netherlands

Figure 12: Mexico

Figure 13: Japan

Figure 14: Italy

Figure 15: Israel

Figure 16: Hungary

Figure 17: Greece

Figure 18: Germany

Figure 19: France

Figure 20: Finland

Figure 21: Denmark

Figure 22: Canada

Figure 23: Belgium

Figure 24: Austria

Figure 25: Australia

Using Ravel

The first stage in using Ravel is to import the data, in this case from the BIS and the OECD—which is what Figure 26 illustrates. An importing form specifies which columns in a database are “dimensions” and which are data. At present we only import CSV files, but the range of import formats will grow after the first release.

Figure 26: Data imported into Ravel


This imported data (stored in the parameters BISDebt, BISHPI, OECDCPI and OECDUnemp) is then, if necessary, attached to a Ravel—the square boxes in Figure 27 with arrows inside them—to enable manipulation of the data and extraction of subsets for further analysis. The top Ravel in Figure 27 stores data on government and private debt in numerous ways—raw domestic currency, percent of GDP, etc. The operations on that Ravel—collapsing one axis and extracting the maximum value stored along it, and selecting a single instance (Lending by All Sectors rather than Banks alone, Adjusted for Breaks rather than unadjusted)—reduce a 7-dimensional object to a 4-dimensional one, where those dimensions are Country, Date, Sector, and Unit (a rough analogy in code follows Figure 27).

Figure 27: using Ravel’s slice, dice and aggregate functions
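Ravel performs these operations graphically. As a rough analogy only—not Ravel’s own interface, and not the BIS’s actual field names—the sketch below shows the same slice-and-aggregate step in Python using xarray, with hypothetical dimension names and sizes standing in for the layout described above.

```python
import numpy as np
import xarray as xr

# A hypothetical 7-dimensional debt array mirroring the layout described above;
# the dimension names, labels and sizes are illustrative, not the BIS's own.
dims  = ["Country", "Date", "Sector", "Unit", "Lender", "Adjustment", "Valuation"]
shape = (3, 8, 2, 2, 2, 2, 2)
debt  = xr.DataArray(np.random.rand(*shape), dims=dims)

# Select a single instance on two axes (e.g. "all sectors" lending, "adjusted for
# breaks") and collapse a third by taking its maximum, leaving the four dimensions
# Country, Date, Sector and Unit—the same reduction performed on the Ravel above.
reduced = (debt
           .isel(Lender=0, Adjustment=0)
           .max(dim="Valuation"))
print(reduced.dims)   # ('Country', 'Date', 'Sector', 'Unit')
```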

Figure 28 then takes the 4-dimensional object Debt and separates it into Private Debt in domestic currency, private debt as a percentage of GDP, and Government debt as a percentage of GDP.

Figure 28: Minimizing Ravels and selecting slices of debt data

Figure 29 showcases Ravel’s analytic power, in three ways:

  • There is no easily accessible database on quarterly GDP by country, but the BIS database has raw data from which it can be derived. If you divide private debt in domestic currency by private debt as a percentage of GDP, and then multiply that by 100, you have GDP in domestic currency for all the countries in the BIS database. There are over 40 countries and close to 400 quarters of data: that would take 16,000 replicated cell formulas in Excel, but it takes just one flowchart equation in Ravel.
  • Nominal growth rate data can be derived by dividing the annual change in GDP by GDP and multiplying by 100. Once again, one flowchart formula replaces close to 16,000 Excel formulas (a pandas sketch of the same arithmetic follows Figure 29); and finally,
  • The real growth rate can be derived by subtracting the inflation rate from the nominal growth rate.

Figure 29: Deriving Quarterly nominal GDP and growth rates for the 43 countries in the BIS database
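As a rough cross-check of the arithmetic just listed, here is a minimal pandas sketch using made-up quarterly data for a single country; every series name and number is illustrative, not BIS or OECD data.

```python
import numpy as np
import pandas as pd

# Made-up quarterly series for one country; every name and number is illustrative.
debt_currency = pd.Series(np.linspace(800.0, 1000.0, 12))   # private debt, local currency
debt_pct_gdp  = pd.Series(np.linspace(95.0, 110.0, 12))     # private debt, % of GDP
inflation     = pd.Series(2.5, index=range(12))             # CPI inflation, % per year

# GDP in local currency = debt in currency divided by debt as a % of GDP, times 100.
gdp = debt_currency / debt_pct_gdp * 100

# Nominal growth rate = annual (four-quarter) change in GDP, divided by GDP, times 100.
nominal_growth = gdp.diff(4) / gdp * 100

# Real growth rate = nominal growth rate minus the inflation rate.
real_growth = nominal_growth - inflation
print(real_growth.round(2))
```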

The final database is shown in Figure 30, with one country—Australia, simply because it’s the first alphabetically—highlighted.

Figure 30: The completed economic database

The importing and calculation routines are then put into groups and reduced in size to reduce clutter on the final dashboard used for this document.

References

Bernanke, Ben S. 2000. Essays on the Great Depression (Princeton University Press: Princeton).

Lucas, Robert E., Jr. 2003. ‘Macroeconomic Priorities’, American Economic Review, 93: 1-14.

Vague, Richard. 2019. A Brief History of Doom: Two Hundred Years of Financial Crises (University of Pennsylvania Press: Philadelphia).

The Paradox of Debt, by the Tycho Brahe of Credit

Very rarely do I review a book and find that the best way to convey its significance is to quote, verbatim, its first four paragraphs:

In 2020, during the darkest hours of the global coronavirus pandemic, the US government spent $3 trillion to help rescue the country’s – and, to some extent, the world’s – economy. This infusion of cash increased US government debt and thus reduced US government wealth by almost the entirety of that frighteningly large amount – the largest drop in US government wealth since the nation’s founding. Surely something this unfavorable to the government’s ‘balance sheet’ would have broad, adverse financial consequences.

So what happened to household wealth during that same year? It rose. And it improved by not just the $3 trillion injected into the economy by the government but by a whopping $14.5 trillion, the largest recorded increase in household wealth in history. As a whole, the wealth of the country – its households, businesses, and the government added together – increased by $11 trillion, so this improvement in wealth was contained largely to households.

How and why did such an extraordinary increase occur?

To understand this paradox, we need to seek answers to some of the most fundamental questions in economics: What is money? What is debt? What brings about increases in wealth? Often the most basic questions can be the most challenging to answer. They appear deceptively simple but they are complex and vitally important.” (Vague 2023, p. 1)

What follows is a magisterial analysis of the role of debt in economics, working from detailed data on each of the world’s major economies. The key focus is, as the title declares, the paradoxical role of debt in a capitalist economy. Debt is both a pre-requisite for economic growth, and a cause of economic crises as well.

“the 2008 financial crisis was no black swan or storm of the century phenomenon; instead, it would have been easy to spot, and might have been foretold years, not months, in advance, if analysts had been looking in the right direction and at the right things. Like most financial calamities, the 2008 crisis was born from the unbridled growth in private sector debt – a key trend that is straightforward to track.” (Vague 2023, p. 222)

And yet it is ignored by mainstream economics, leaving its analysis to a band of contrarians including me (Keen 1995), Hyman Minsky (Minsky 1982), whose “Financial Instability Hypothesis” inspired me, Irving Fisher (Fisher 1933), whose “Debt Deflation Theory of Great Depressions” inspired him, Michael Hudson (Hudson 2018) and the late David Graeber (Graeber 2011), who track the history of debt, Richard Werner (Werner 2016), who explains the mechanisms by which debt creates money, and now Richard Vague (Vague 2019, 2023), who covers the empirics of private and public debt in great and fascinating detail.

“A lending boom is optimism on steroids… Euphoria is the hardest habit to quit… Lending and debt are the agents and catalysts of that euphoric delusion. To seek to explain booms solely through impersonal, technical factors is to miss the fact that economics is a behavioral and not a physical science. It is to miss the essence of financial crises.” (Vague 2023, p. 188)

I often feel that the struggle against Neoclassical economics is like the struggle by believers in the Heliocentric model of the solar system against the then dominant Geocentric model of Aristotle and Ptolemy. Copernicus, Brahe, Kepler, Galileo, and finally Newton, were the pivotal fighters for truth in that struggle. I’m not putting any of us on the same pedestal as those critical contributors to the triumph of science over religion, but my mind often wanders to the personal parallels: whose contributions to realism on the monetary nature of capitalism are most akin to the astronomical contributions of those giants?

“In the lead-up to the Global Financial Crisis of 2008, the US accumulated a gargantuan mountain of new mortgage debt, totaling $5 trillion. This debt was so large it was practically impossible to miss – except that most economists did miss it entirely, and therefore failed to predict the financial crisis.” (Vague 2023, p. 58)

Fisher was our Copernicus, first putting forward the theory that “over-indebtedness to start with and deflation following soon after” are the key factors in causing Great Depressions. Minsky was our Kepler, working out the elliptical ways in which debt drove economics. Perhaps, by inventing the Minsky software—which I regard as the monetary equivalent of Galileo’s telescope—I have some parallels with Galileo. But there is no doubt about the astronomical doppelganger for Richard Vague: it is Tycho Brahe.

“Still another objection is that this is just one more example of government trying to pick winners and losers, and only the wisdom of markets can do that. Well, in 2007, the market was picking no-down-payment mortgage loans as winners and how did that work out?” (Vague 2023, p. 226)

Tycho’s meticulous observations of the motions of the planets and stars provided the rock-solid foundations on which the Heliocentric model was constructed. This is the realm in which Vague excels. This book, and the website that supports it, does for the credit-based model of capitalism what Tycho’s measurements did for the Heliocentric model—a model without which we would still be an Earthbound species, rather than one with the potential to reach for the stars.

“Based on these ratios, debt is a peripheral issue to the top 10 percent of US households but a monumental issue to many in the bottom 60 percent… [those who] champion the trickle-down theory of economics are correct – except for one detail: it is debt that has been trickling down, not wealth.” (Vague 2023, pp. 70-72)

The Paradox of Debt: A new path to prosperity without crisis provides that data in an extremely accessible and entertaining form. It’s not easy to write about empirical facts embodied in tables and graphs, and make the prose engaging. Vague does it with ease.

“In the early 1980s, many economists made dire predictions about the likely consequences of high levels of government debt. They warned that it would constrain spending, crowd out lending and investment, lead to higher interest rates and inflation, and seriously encumber the country. At the time, inflation had reached 14 percent and interest rates were close to 20 percent.

Since then, government debt has exploded and so we have had ample opportunity to put these predictions to the test. As it turns out, over this time span interest rates have generally plummeted, not risen; investment has remained high, not been constrained; and household net worth has risen, not sunk.” (Vague 2023, p. 55)

I am in some ways too close to this topic myself—and too close to the author, who is not merely a friend, but also one of my favorite people—to write a detailed commentary and critique here. What I can say is that reading Vague is much easier than reading Keen, and provides insights that I don’t, through the wealth of data Richard has assembled with his research group—which he has, funnily enough, named after Tycho Brahe.

Figure 1: Global debt levels from the BIS, with the analysis done in my new program Ravel
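
For readers without Ravel, the core calculation behind a chart like this is straightforward: private debt divided by GDP, tracked over time, together with its year-on-year change. The sketch below is a minimal Python illustration of that arithmetic. It is not the Ravel workflow, and the file name and column names (a hypothetical quarterly CSV, bis_credit_sample.csv, with date, private debt and GDP columns) are placeholders rather than the actual BIS file layout.

```python
# Minimal sketch (not the Ravel workflow): debt-to-GDP ratios from a
# hypothetical quarterly CSV with "date", "private_debt" and "gdp" columns,
# where debt and GDP are measured in the same currency units.
import pandas as pd
import matplotlib.pyplot as plt

df = pd.read_csv("bis_credit_sample.csv", parse_dates=["date"])

# Level: private debt as a percentage of GDP
df["debt_to_gdp"] = 100 * df["private_debt"] / df["gdp"]

# Change: the year-on-year (four-quarter) change in that ratio,
# a rough gauge of how rapidly leverage is rising or falling
df["annual_change"] = df["debt_to_gdp"].diff(4)

df.plot(x="date", y=["debt_to_gdp", "annual_change"],
        title="Private debt to GDP: level and annual change")
plt.show()
```

The level of the debt ratio and its rate of change are the two quantities on which Vague’s arguments about credit booms and slowdowns turn.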

“In particular, the growth that comes from government spending could come instead from a non-debt-based source. All it takes is for the Treasury to sell a non-interest-bearing instrument with no maturity to the Federal Reserve… Let’s call this instrument perpetual money. It may well be that a balance between debt-based money and perpetual money is a healthier and more technically sound way of managing monetary policy.” (Vague 2023, p. 256)

Though there are many aspects of Vague’s analysis that are consistent with other contrarian theories, Richard is not beholden to any doctrines, and frequently makes observations that challenge other contrarians. For instance, he regards a trade deficit as a negative, which contradicts an aspect of Modern Monetary Theory that I also criticise. He worries about aggregate debt—public as well as private—whereas other contrarians, me included, tend to see government debt as benign (the opposite of the Neoclassical argument). Like me, he argues for a debt jubilee, though with quite a different structure to my “Modern Debt Jubilee”.

Vague summarises his core propositions early in the book:

“The ratio of debt to income in economies almost always rises, with profound consequences, both good and bad.

• Money is itself created by debt.

• New money, and therefore new debt, is required for economic growth.

• Rising total debt brings an increase in household and national wealth or capital. Most wealth is only possible if other people or entities have debt. As wealth grows, so too must debt.

• At the same time, debt growth brings greater inequality, in part because middle- to lower-income households carry a disproportionate relative share of household debt burden. In fact, in economic systems based on debt – which is the world as it operates now – rising inequality is inevitable, absent some significant countervailing change such as a major change in a nation’s tax policy.

• A current account and trade deficit contributes to private sector debt burdens.

• The overall increase in debt, especially private debt, eventually slows economic growth and can bring economic calamity.” (Vague 2023, p. 7)

This makes his case worthy of attention from other contrarians, as well as politicians, the general public, and the investment community. He even spices up the book with predictions, based on his careful attention to the data, which financial analysts may find both surprising and potentially rewarding.

“Any among these turns of events would likely force Germany to make some stark economic choices. Surely it would – and, given the outlook for China’s GDP growth, I’m tempted to say it will – suffer a contraction, unless the government quickly encourages large increases in private sector spending, funded by increased business and household debt, or, alternatively, enacts large increases in government spending.” (Vague 2023, p. 115)

I strongly recommend this book (the sale proceeds from which go to charity) to my own readers, and if you haven’t heard of Richard Vague before, read this excellent and entertaining profile, from the days prior to 2020 when he was considering running for President: Richard Vague May Be the Most Revolutionary Thinker in Philly (phillymag.com).

And yes, in case you’re wondering, we are considering writing a book together. The extent to which our approaches to economics complement each other is matched only by the novelty of our contradictory names.

Click here to purchase The Paradox Of Debt

Fisher, Irving. 1933. ‘The Debt-Deflation Theory of Great Depressions’, Econometrica, 1: 337-57.

Graeber, David. 2011. Debt: The First 5,000 Years (Melville House: New York).

Hudson, Michael. 2018. …and forgive them their debts: Lending, Foreclosure and Redemption From Bronze Age Finance to the Jubilee Year (Islet: New York).

Keen, Steve. 1995. ‘Finance and Economic Breakdown: Modeling Minsky’s “Financial Instability Hypothesis”’, Journal of Post Keynesian Economics, 17: 607-35.

Minsky, Hyman P. 1982. Can “It” Happen Again? Essays on Instability and Finance (M.E. Sharpe: Armonk, N.Y.).

Vague, Richard. 2019. A Brief History of Doom: Two Hundred Years of Financial Crises (University of Pennsylvania Press: Philadelphia).

———. 2023. The Paradox of Debt: A new path to prosperity without crisis (Forum: London).

Werner, Richard A. 2016. ‘A lost century in economics: Three theories of banking and the conclusive evidence’, International Review of Financial Analysis, 46: 361-79.