Steady, steady…

This post aims to clarify the difference between three concepts that most people doing computational work in economics, notably macro, come across sooner or later. These three concepts are the deterministic steady-state (DSS), the stochastic steady-state (SSS), which also goes under the name ergodic mean in the absence of shocks (EMAS), and the ergodic mean with shocks (EMWS). Though seemingly similar, they differ in important conceptual and practical ways. The piece is motivated by the observation that this nexus of concepts creates a fair amount of confusion, at least judging by questions in online forums, yet I have not come across a unified summary.[1] The post addresses the definition of the three concepts; how the DSS differs from the SSS; why the SSS is sometimes referred to as EMAS; and why all of this matters. In the interest of conveying the key ideas without too much notational baggage, I will at times abuse mathematical notation.

To fix ideas, consider a dynamic and stochastic (discrete-time) system made up of just one endogenous variable, y, that is subject to the exogenous shock \epsilon; the ideas extend to higher-dimensional systems. Write the policy function for y_t defining optimal decisions given state y_{t-1} and shock \epsilon_t as y_t=g(y_{t-1},\epsilon_t); I assume this to be known (i.e., the model is solved and we ignore approximation error). To complete the notational setup, denote the past history of shocks by \Omega_{\epsilon,t} \equiv \{...,\epsilon_{t-2},\epsilon_{t-1}\} and future realizations of shocks by \Omega_{\epsilon,t}^{f} \equiv \{\epsilon_{t+1},\epsilon_{t+2},...\}.

The deterministic steady-state of a system refers to the fixed point of that system provided all stochastic elements are removed forever. In other words, it is the state reached in the absence of shocks and expecting no future risk. Thus, the deterministic steady-state \bar{y}^{\text{DSS}} satisfies (with some misuse of notation)
g(y_t,\epsilon_t=0|{\Omega}_{\epsilon,t}^{f}=\{0,...\})-g(y_{t+1},\epsilon_{t+1}=0|{\Omega}_{\epsilon,t+1}^{f}=\{0,...\})=0 \quad \forall t,
and we can write it as
\bar{y}^{\text{DSS}}=g(\bar{y}^{\text{DSS}},0|{\Omega}_{\epsilon}^{f}=\{0,...\}).
The DSS is, in principle, straightforward to compute: we set shocks to zero, “drop the time subscripts” and solve the resulting system of equations for the fixed point either analytically or computationally using, say, a Newton-type algorithm.
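
For concreteness, here is a minimal MATLAB sketch of that recipe for a hypothetical one-equation model; the functional form and all parameter values are purely illustrative and not taken from anything discussed in this post, and any Newton-type root-finder would do in place of fsolve.

alpha = 0.33; beta = 0.99; delta = 0.025;                  % illustrative parameters
dss_eq = @(k) beta*(alpha*k.^(alpha-1) + 1 - delta) - 1;   % static equation: shocks zeroed, time subscripts dropped (here, a steady-state Euler equation)
kDSS = fsolve(dss_eq, 10);                                 % Newton-type solve from an initial guess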

The stochastic steady-state (SSS) of a system, on the other hand, is that point in the state-space where agents would choose to remain if no shocks materialize in the current period but may well do so in the future. That is, the stochastic steady-state satisfies
g(y_t,\epsilon_t=0)-g(y_{t+1},\epsilon_{t+1}=0)=0 \quad \forall t,
and, hence,
\bar{y}^{\text{SSS}}=g(\bar{y}^{\text{SSS}},0).

Finally, the ergodic mean with shocks (or simply, ergodic mean) corresponds to the theoretical (unconditional) mean of the process when shocks keep hitting the economy:
\bar{y}^{\text{EMWS}} = \mathbb{E}[y_{t}].
In the simplest terms — any statistician reading this, please forgive me for not delving in detail into the relationship between time averages and ensemble averages — we can think of the ergodic mean as the average value of y over a long period of time. More precisely, because the systems we typically work with satisfy certain conditions (stationarity and ergodicity), theoretical moments such as the mean are constant and can be obtained by computing the time average from a sufficiently long sample. The basic idea is that if we simulate a long chain, then we will have visited all parts of the state-space (with frequencies corresponding to the stochastic properties of the system) and can compute a time average that is informative about the expected value of the process.
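
As a minimal illustration of this time-average idea, take a toy non-linear policy rule written in deviations from a DSS level (ybar in the code); every number is made up and chosen only so that the various objects are visibly distinct, and the constant involving sigma^2 is a stand-in for the kind of risk-correction term discussed below.

ybar = 1; rho = 0.9; gyy = -0.2; gss = 0.5; ge = 1; sigma = 0.05;                       % illustrative numbers
g = @(y,e) ybar + rho*(y-ybar) + 0.5*gyy*(y-ybar).^2 + 0.5*gss*sigma^2 + ge*sigma*e;    % hypothetical policy rule
rng(1); T = 100000; B = 1000; y = zeros(T,1); y(1) = ybar;
for t = 2:T, y(t) = g(y(t-1), randn); end                                               % simulate a long chain with shocks
yEMWS = mean(y(B+1:end));                                                               % time average after burn-in approximates E[y_t]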

The figure below illustrates the three concepts for a somewhat randomly chosen model I just had at hand. The grey line shows the simulated model with shocks; the grey line with bullets gives the EMWS. The blue line is the DSS and the orange line is the EMAS. Evidently, all three objects differ.

[Figure: simulated series with shocks (grey), EMWS (grey with bullets), DSS (blue) and EMAS (orange).]

What is the intuition for these differences? It is maybe not that difficult to imagine that the EMWS is different from the other two: to give but one example, if the model is highly non-linear and good times are better than bad times are worse, then taking the average over the shock realizations might leave you with an expected value that lies above the DSS. But why is the deterministic steady-state different from the stochastic steady-state? The exact answer will differ from model to model, but in general it is that if there is uncertainty about the future, agents may exhibit precautionary behavior that is not present if there is no risk at all. The magnitude of this “uncertainty correction” will be affected both by the amount of risk embedded in the model at hand and by the extent to which agents respond to any given amount of risk (i.e., curvature).[2] Importantly, differences between DSS, EMWS and EMAS will appear only when using non-linear methods (e.g., higher-order perturbations, non-linear projections, etc.): if we approximate a solution using linearization, this approximation will exhibit “certainty equivalence,” which for our purposes can be taken to mean that the terms of the approximated policy rule (and, hence, the steady-state(s)) are unaffected by uncertainty.

To make the nature of this “uncertainty correction” easier to grasp, imagine you have a certain income each month that you can use in three different ways: movies, housing, and savings — the latter doesn’t earn any return [3], but you can use accumulated savings to keep watching movies and having a roof over your head if you lose your job, say. You really enjoy going to the cinema, but it would be horrific if you spent so much money on movies that you were no longer able to pay the rent. Perhaps you should not squander the money on the cinema but instead put it in a savings account? Presumably the optimal thing to do depends on how big the risk is of not being able to come up with the rent payment. Suppose, for instance, that you know for sure that each month you have $1,000 available — it’s always been that way and it will always be the case. Then you can calculate exactly how much is taken up by rent and how much you can spend on the cinema each period. This corresponds to the DSS. Now imagine, though, that although historically you reliably received your income, you are worried that the economy might go downhill and you could lose your job. As a result, you might think to yourself that you had better save some money for rainy days instead of consuming. If we then looked at your behavior over the next couple of months, even if the income risk never actually materializes, we would find that you spent less money on the cinema than you would have in the absence of future risk.
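
To put rough numbers on this story, here is a minimal two-period sketch with log utility, no discounting and no return on savings (as in footnote [3]); all figures are made up. With $1,000 today and a 10% chance of receiving only $300 next month, the consumption Euler equation implies positive precautionary saving, whereas with certain income it implies zero saving.

y1  = 1000;                                              % income today, known for sure
foc = @(s) 1./(y1-s) - (0.9./(1000+s) + 0.1./(300+s));   % Euler equation under income risk: u'(c_today) = E[u'(c_tomorrow)]
sRisky = fzero(foc, 50);                                 % optimal (precautionary) saving, roughly $80
% with certain income of $1,000 tomorrow, the same condition gives zero saving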

But why is the SSS sometimes referred to as the “ergodic mean in the absence of shocks”? Because we can also think of it as the average value in a long sample in which shock realizations are zero yet agents take into account the possibility of shocks occurring. That is [4],
\bar{y}^{\text{EMAS}} = \mathbb{E}[y_{t}|\epsilon_t=0].
This way of thinking about the SSS is also informative about the method by which we can find it. Unlike for the DSS, we cannot simply ignore randomness. Fortunately, though, we can compute the SSS using simulation-based methods — just as we would do for the EMWS. First, iterate on y_{t+1}=g(y_t,\epsilon_{t+1}=0) for T periods, where T is large, starting at \bar{y}^{\text{DSS}}. Note that all shock realizations are zero, but in each period, agents do not know that this will be the case going forward. Given the resulting sample \{y_s\}_{s=1}^{T}, we approximate \hat{y}^{\text{SSS}} = y_{\text{B}+1}, where B is the number of burn-in periods needed for the process to converge from the DSS to the SSS. Qua steady-state, y_{\text{B}+1}=y_{\text{B}+2} and we can equivalently say that \hat{y}^{\text{SSS}} = \frac{1}{T-B} \sum_{s=B+1}^T y_s. [5]
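
To make the recipe concrete, here is the same kind of sketch as before (the rule and all numbers remain purely illustrative; the made-up constant 0.5*gss*sigma^2 plays the role of the uncertainty correction that separates the SSS from the DSS):

ybar = 1; rho = 0.9; gyy = -0.2; gss = 0.5; ge = 1; sigma = 0.05;                       % same illustrative rule as above
g = @(y,e) ybar + rho*(y-ybar) + 0.5*gyy*(y-ybar).^2 + 0.5*gss*sigma^2 + ge*sigma*e;
B = 1000; ySSS = ybar;                                                                  % start at the DSS
for t = 1:B, ySSS = g(ySSS, 0); end                                                     % iterate with all shock realizations set to zero
fprintf('DSS = %.4f, SSS = %.4f\n', ybar, ySSS);                                        % the (small) gap reflects the risk term

With these made-up numbers the SSS ends up slightly above the DSS, simply because the illustrative risk-correction constant was chosen to be positive; the EMWS computed earlier will generally differ from both.
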
Do these differences really matter? In fact, carefully distinguishing between the three different concepts is important in a number of applications. Two examples are the characterization of the moments of model-generated data and the computation of impulse response functions. For instance, suppose the goal is to characterize the average level of consumption in an economy subject to risk. Then the DSS will give a misleading answer, whereas the EMWS and the EMAS will take into account agents’ reaction to risk. [6] Second, when non-linear methods are used to solve a model, impulse responses will depend on both \Omega_{\epsilon,t} and \Omega_{\epsilon,t}^f. One way to deal with this is to look at a “representative” IRF at the ergodic mean, in the sense that future shock realizations are averaged out. Thus, the “generalized” impulse response (GIR) of y after a shock \epsilon_t is given by
GIR_l(\epsilon_t,\Omega_{\epsilon,t-1}) = \mathbb{E}_t[y_{t+l}|\epsilon_t,\Omega_{\epsilon,t-1}] - \mathbb{E}_t[y_{t+l}|\epsilon_t = 0,\Omega_{\epsilon,t-1}].

Here, too, care needs to be taken in choosing the right concepts: for the sake of consistency, when computing such a GIR using simulations, we should start at the EMWS rather than the DSS (or EMAS); if, instead, we started from the DSS, say, then the IRFs would confound the transition from the DSS to the ergodic mean with the true impulse response. In addition, when impulse responses are shown in proportional deviations from “the steady-state,” the relevant benchmark against which to express the absolute deviations is the EMWS.
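
A rough simulation-based sketch of such a GIR, again under the purely illustrative rule used above: the EMWS starting point is approximated by the time average of a long simulated chain, and future shock realizations are averaged out across replications.

ybar = 1; rho = 0.9; gyy = -0.2; gss = 0.5; ge = 1; sigma = 0.05;                       % same illustrative rule as above
g = @(y,e) ybar + rho*(y-ybar) + 0.5*gyy*(y-ybar).^2 + 0.5*gss*sigma^2 + ge*sigma*e;
rng(1); T = 100000; B = 1000; y = zeros(T,1); y(1) = ybar;
for t = 2:T, y(t) = g(y(t-1), randn); end
y0 = mean(y(B+1:end));                                                                  % starting point: approximate EMWS
H = 40; R = 5000; impulse = 1; gir = zeros(H,1);                                        % horizon, replications, one-s.d. shock
for r = 1:R
    eFuture = randn(H,1);                                                               % future shocks, identical across the two paths
    yA = y0; yB = y0;
    for h = 1:H
        eA = eFuture(h); eB = eFuture(h);
        if h == 1, eA = impulse; eB = 0; end                                            % condition on epsilon_t versus epsilon_t = 0
        yA = g(yA, eA); yB = g(yB, eB);
        gir(h) = gir(h) + (yA - yB)/R;                                                  % average out future shock realizations
    end
end
plot(gir);                                                                              % generalized impulse response of y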

 

Postscript: Implementation in Dynare

One of a number of ways users of Dynare can obtain these three objects is, schematically, as follows.

  1. If you are just interested in the deterministic steady-state and/or the ergodic mean:
    i.) Solve and simulate (computing empirical rather than theoretical moments):
    stoch_simul(order=3, periods = 2096, drop = 2000, pruning, k_order_solver, noprint, irf=0); 
    % simulated endogenous variables in  oo_.endo_simul and exogenous variables in  oo_.exo_simul
    ii.) Deterministic steady state:
    vDSS = oo_.steady_state;
    iii.) Ergodic mean with shocks:
    vEMWS = mean(oo_.endo_simul,2);
  2. If you want the stochastic steady-state:
    i.) Solve the model (to obtain the decision rules):
    stoch_simul(order=3,pruning,k_order_solver,noprint,irf=0);
    ii.) Specify:
    Periods = 96;
    SimulBurn = 2000;
    iii.) Generate a matrix (or vector) of zero shocks:
    mEpsZero = zeros( SimulBurn+Periods,M_.exo_nbr);
    iv.) Simulate with shocks set to zero:
    mSimulNoShocks = simult_(oo_.dr.ys,oo_.dr,mEpsZero,options_.order)';
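    % simult_ simulates the decision rules for the supplied matrix of shocks, starting here from the DSS (oo_.dr.ys)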
    v.) EMAS is any of the final points after burn-in:
    vSSS=mSimulNoShocks(1+SimulBurn,:);

NB:
– Notation: values to be chosen by the user are in italics. The “v” prefix stands for “vector” while “m” stands for “matrix.” I employ the CamelCase convention.
– Of course, the concepts are valid beyond Dynare or perturbation methods more generally! For instance, the figure above is based on a global projection algorithm.
– This is largely based on J. Pfeifer’s replication of Basu and Bundick (2017).

Footnotes

[1] Among others, I draw on Koop et al. (1996), Coeurdacier et al. (2011), Fernandez-Villaverde et al. (2011), Born and Pfeifer (2014) and Andreasen et al. (2018), but above all the comments of the unwearying Johannes Pfeifer in the Dynare forum.

[2] To be clear, the EMWS likewise accounts for uncertainty; indeed, it does so in a more comprehensive way (see Lan and Meyer-Gohde, 2014). However, it also accounts for the implications of realized volatility.

[3] I am abstracting here from questions of time preference.

[4] It is tempting to also condition on {\Omega}_{\epsilon,t}=\{...,0\} for both EMAS and DSS, tightening the link between theoretical concept and the way it is approximated through simulation, but conceptually this seems unnecessary.

[5] For the EMWS, we would likewise gather a sample by iterating on y_{t+1}=g(y_t,\epsilon_t) and then taking a time average, but here, of course, y_{\text{B}+1} \neq y_{\text{B}+2}.

[6] Which one of these ergodic means is appropriate will depend on the question we wish to answer.


The Productivity Paradox – A Survey

I recently contributed a survey article on the “productivity paradox” to the independent think tank Dezernat Zukunft. You can find the piece on their website.

 

 

Gender Equality and STEM Gaps

This blog post veers further away from (macro-) economics than my usual writing and I am anything but an expert on the topic. Thoughts and feedback are, hence, all the more welcome!

A friend (and chemistry teacher by trade) recently sent me the chart reproduced below, which is taken from a study authored by Professor Gijsbert Stoet and Professor David C. Geary and published in the journal Psychological Science earlier this year. The chart captures what the authors coin the educational-gender-equality paradox: “The More Gender Equality, the Fewer Women in STEM”, as the Atlantic puts it (STEM: Science, Technology, Engineering, Mathematics). To elaborate, countries with high levels of gender equality tend to have larger STEM gaps insofar as the proportion of women among STEM graduates is lower, on average. For example, Finland ranks second-highest (surpassed only by Iceland) according to the GGGI 2016 published by the World Economic Forum (for the 2017 version, see here), yet it also has one of the world’s largest gender gaps in college degrees in STEM fields: fewer than 25% of STEM graduates are women. Similar remarks apply to two other countries commonly seen to be at the vanguard of gender equality, Norway and Sweden. On the opposite end of the spectrum, in Algeria 41% of college STEM graduates are female, all the while it ranks 119th in the 2016 GGGI ranking. This may be called a paradox insofar as, according to this study, women are less likely to choose male-dominated math and science professions in countries that empower women. At the very least, the results went against my prior and the intuition-based expectation of every single person I talked to about this study. In this blog post, I explore a few potential explanations and interpretations of the correlation shown in the chart. [1] This is no rigorous analysis (e.g. testing for causality), but more of a brainstorming session combining ideas mentioned in the paper with points raised by friends and my own reflections. The reader may nevertheless find it interesting.


Figure 3b in Stoet and Geary (2018).

Before we get to attempts at explanation, let’s take a brief look at the data first. For each of the 144 countries covered, the Global Gender Gap Index (GGGI) assesses four overall areas of inequality between females and males – economic participation and opportunity, educational attainment, political empowerment, health and survival – as measured by 14 key indicators (e.g. life expectancy, seats in parliament, earnings). The index is represented on a 0-1 scale, with 1.0 representing complete parity (or males falling behind). It is worth noting that the index focuses on gaps rather than levels, thus facilitating a better cross-country comparison: it “assesses countries on how well they are dividing their resources and opportunities among their male and female populations, regardless of the overall levels of these resources and opportunities” (2008 Report, p. 24). As I briefly remark upon in footnote [3], the index is inevitably flawed, insofar as the indicators will not capture all dimensions of gender equality we may deem important. Meanwhile, the STEM graduation rates come from UNESCO graduation data labeled “distribution of tertiary graduates.”

There are at least four potential explanations of – or factors relating to – the educational-gender-equality paradox we may consider. These come, respectively, under the headings “life quality pressures”, “comparative advantage”, “confounding factors”, and “data issues”. A first explanation emphasises “distal” (broad contextual) influences on females’ choices. In particular, women in less gender-equal countries may also face greater economic and general life risk, which may make relatively high-paying STEM occupations more attractive relative to contexts with greater opportunities and lower risks. As a friend (and STEM PhD) put it: In rich countries like Finland and Norway, “women are more likely to have the luxury to choose any subject they please (regardless of employment outcome) … if you are a women getting third level education in a less wealthy country there’s probably more pressure to get a better paid job afterwards and they might see those opportunities in STEM.” In this view, greater gender equality is associated with greater freedom – and that includes the freedom to choose fields other than STEM. One observation from the study that is consistent with this hypothesis is as follows: even when girls’ ability in science in school equalled or exceeded that of boys (on which more below), they nevertheless tended to register a lower interest in science subjects; according to the study, this is true in 76% of countries. Of course, this raises the broader question of where such differences in interest might come from and why, remarkably, the “interest gap” is greater in more gender-equal countries.

The second explanation is connected to the economic concept of comparative advantage: In two thirds of countries, girls performed similarly to or better than boys in science subjects in school. However, they often performed even better in reading, such that their comparative (rather than absolute) advantage may not lie in science. To add some figures to the picture: Across all countries, 24% of girls had science as their best subject, 25% had math, and 51% had reading. For boys, the percentages were 38 for science, 42 for math, and 20 for reading. Thus, an Atlantic article quotes Professor Janet Shibley Hyde: “some would say that the gender STEM gap occurs not because girls can’t do science, but because they have other alternatives, based on their strengths in verbal skills.” While these observations by themselves do not suffice to explain the educational-gender-equality paradox, “the magnitude of these sex differences in personal academic strengths and weaknesses was strongly related to national gender equality, with larger differences in more gender-equal nations,” Stoet and Geary find. As a result, such “proximal” factors as personal academic strengths may, at least in part, underpin the negative correlation between gender equality and the STEM gap.

The preceding explanation, centered around the idea of comparative advantage and individual rational choice, leaves a number of questions open. For instance, why is it that in more gender-equal countries, girls have a greater comparative advantage in non-STEM fields than in less gender-equal ones, on average? I find it difficult to think of a causal mechanism leading from greater gender equality to more pronounced relative advantages of women in non-STEM fields. Moreover, even if comparative advantage is part of the story, the data suggest that, even in more gender-equal countries, the number of girls whose relative advantage lies in science or math exceeds the number of women choosing STEM degrees, meaning that even in those countries women whose best subjects are math and science may be nudged away from careers in those professions.

It is possible that the answer to the puzzles or open questions raised above is that there may be confounding factors at work which, at a minimum, caution against reading causality into the observed correlation, or that there are data problems. The “life quality pressures” narrative could be read as saying that greater gender equality implies greater freedom of choice and that this, given measured differences in interest, leads to a greater STEM gap. One alternative explanation is that wealth and gender equality are correlated and that it is greater overall wealth, and not greater gender equality, that drives greater freedom of occupational choice. Indeed, having taken a brief look at (OECD) data, it turns out that although the GGGI focuses on gaps in access rather than levels of resources, the index is generally positively correlated with GDP per capita (correlation coefficient across the total sample: +0.23; +0.34 in the OECD sub-sample). Relatedly, cases like the Scandinavian countries cited at the outset illustrate another potential confounding factor: the level of social security provided in more gender-equal countries is typically higher (e.g. according to the OECD social spending measure; correlation coefficient with the GGGI in the OECD sub-sample: +0.38). This reduces the “economic and general life risks” I cited above as a potential factor driving women to choose financially rewarding STEM careers in less gender-equal countries. Third, country-specific factors combined with path dependency may explain some of the patterns we observe. For instance, in the Scandinavian countries, the early entry of women in the labor market was crucial in creating a social democratic service state which nurtured a constituency of socio-cultural professionals – and which today offers relatively well-paid jobs with flexible working arrangements that may make it less attractive for women to choose STEM subjects than in countries without a comparable development path.

Lastly, and circling back to where this blog post began, the paradox may at least in part be an artefact of the data. For instance, some careers that are not STEM by definition and which have a majority of women nevertheless often do require STEM skills (e.g. medicine). To the extent that women with a comparative advantage and interest in STEM choose such occupations, the graduation rates used in the study may overstate the STEM gap in terms of people’s actual engagement with STEM subjects (though this point, by itself, does not address the role of gender equality).

On the whole, these reflections leave me less surprised about the pattern documented by Professors Stoet and Geary than I was at the outset, but a number of questions remain. Let me raise a few more to close. I am generally an optimist and believe that notwithstanding political setbacks, societies around the world will achieve greater gender equality, albeit at a speed that is inevitably too slow. However, it is worth noting that if – and that’s a big ‘if’ – the cross-sectional results established in this study also hold in a dynamic setting, i.e. if women’s STEM participation does not rise as we make progress towards greater gender equality (as measured by the GGGI [2]), this also means, for instance, that the technologies shaping our world and everyday life will likely continue to bear the imprint of male bias. As is by now frequently written about, the technologies we use, including such present-and-future-shaping things as algorithms or AI technology, are not normatively passive or value-neutral. [3] Potential normative ramifications of the pattern documented by Stoet and Geary such as this tend to make me believe that even if the data reflect women having greater freedom of choice in liberal societies, we need to continue to think hard about how to make STEM fields more attractive for women.

Footnotes
[1] For readers with Bayesian inclinations, having realised that my prior needs updating, I am trying to figure out in what ways/directions I should update.
[2] Of course, at least to the extent that such a lack of parity in occupational choices manifests itself in the perpetuation of biases and discrimination, as contemplated below, this itself would represent a failure to achieve feminist goals and in that sense gender equality would not be realised.
[3] I like the example the software engineer Liz Rush gives to illustrate how people’s beliefs impact the design of even seemingly innocent technologies. In the design of air conditioning and heating systems, the developers had to take into account variables such as what feels comfortable etc. Amongst other things, the designers used the resting metabolic rate of a middle-aged white man – but since the body’s resting metabolic rate varies from person to person, this means that office temperature optimization was designed around an (implicit) bias that men’s comfort in the office is what you should (or at least de facto do) use as the baseline standard.

6 Rules for Economics Writing

Paul Romer kindly unscrambled an exchange on Twitter from New Year’s Eve, the topic being “clear and precise scientific communication” and, in particular, the use of both math and words in economics writing. I found it helpful to order my thoughts on this question in similarly synthesised form and to put them into context. Here’s the result.

Math is often supremely helpful. It can aid economists (and others) in clarifying and expressing their thoughts succinctly. Furthermore, it prevents unwarranted jumps in reasoning. [1] However, using math well is difficult. Three potential problems are as follows.

  1. Math may be decorative in the sense of making the model and argument harder to understand without adding anything essential to the model or its implications.
    One of several reasons for using such decorative math may e.g. be competition on the academic job market (where technical skills may help one stand out). However, this practice can make it more difficult for the researcher themselves to gain insights into the critical mechanisms at work. And it makes it hard for the audience to follow the reasoning presented.
  2. Math may encourage “lazy thinking”.
    As Nick Rowe pointed out on Twitter, Joan Robinson went as far as stating: “I never learned math, so I had to think.” We don’t need to go as far as Robinson (I certainly wouldn’t) to acknowledge that near-automatic reliance on math might get in the way of what ought to be prior steps. [2] Thus, Paul Krugman writes that “many misunderstandings in economics come about because people don’t have in their minds any intuitive notion of what it is they’re supposed to be modeling.”[3]
  3. Math may permit or encourage the creation and use of “chameleon models” where math and words (and/or conclusions) are divorced.
    Paul Pfleiderer defines a chameleon model as “built on assumptions with dubious connections to the real world but nevertheless has conclusions that are uncritically (or not critically enough) applied to understanding our economy.” As Dietz Vollrath notes, this is probably what Paul Romer’s critique of “mathiness” most directly refers to: “[M]athiness lets academic politics masquerade as science. Like mathematical theory, mathiness uses a mixture of words and symbols, but instead of making tight links, it leaves ample room for slippage between statements in natural versus formal language and between statements with theoretical as opposed to empirical content.” (See also Brad De Long on this). To a not insignificant extent, such a gap between words and conclusions is so problematic because economics, on the one hand, and politics and policy, on the other hand, are so closely intertwined: Economics is necessary for good statecraft and will inevitably be used for political ends; conversely, politics affects the economy and, no less important, politics shapes researchers’ worldviews. As noted in Paul Romer’s blog piece, I struggle with calling this a problem of “intent”, as it doesn’t chime with my personal impression: The small sample of economists I’ve interacted with generally do seem to have ‘good intentions’ (i.e. to produce research developing policies that promote public welfare), suggesting that factors other than intentions are at work. The idea of a “problem of attention” seems to me more apposite. It captures the notion that all of us carry political ideas with us that may influence our modelling and our interpretation of models unless we pay close attention and exercise self-reflection. As an aside, rather fittingly Harold Evans says about his book “Do I Make Myself Clear”, which Paul Romer commended in the conversation, that it is “concerned with how words confuse and mislead, with or without malice aforethought …” (my emphasis).

To minimize the incidence of problems such as those discussed above, on Twitter I suggested that we might need a variant of Orwell’s “6 Rules for Writing” for Economics. A (revised) version of these rules could be as follows:

  1. Never use mathematical symbols where they do not add to clarity and precision relative to natural language.
  2. Never use math unless you intend to use it for the sake of clarity and precision.
  3. Never use mathematical tools before you understand what exactly you want to model and how math can help in doing so.
  4. Never use a mathematical model to provide policy recommendations unless you understand what assumptions underpin that recommendation.
  5. Never use complex, long, and opaque formulations where simple ones will do — in both mathematical and natural language.
  6. Break any of these rules sooner than say or model anything barbarous.
As Paul Romer noted, in parallel to the rules for the writer, there are also rules for the reader — perhaps analogous to Dani Rodrik’s commandments for economists and non-economists, which triggered the Twitter conversation. Romer suggests two examples:
“1. When an assertion is vague, ask for specifics.
2. When an assertion is opaque, assume that it is the author’s fault, not yours, and ask for clarification.”

 

Postscript: It is perhaps legitimate to ask what qualifies me specifically to comment on this topic (seeing as I haven’t even published a paper yet). One answer to this question is: nothing; but it’s a valuable exercise for me to order my thoughts as well as to clarify what I should aspire to in my research and writing. A second, more tentative answer is that, in some contexts, fresh eyes may offer a useful, complementary perspective to more experienced voices, e.g. because they haven’t grown to take some things for granted.

 

Footnotes:
[1] Because of these valuable properties, I am not of the view that e.g. economics graduate programs teach too much math. In some research areas, we might actually need more (difficult) math to obtain better results. For instance, solving heterogeneous agent models tends to require more (and more difficult) math and computational methods than needed for representative agent models, and yet their development is highly desirable.
[2] Writing my own thesis I sometimes notice the temptation to just ‘do the math’ (or write the code etc.) rather than first thinking about what I actually wanted to use the math for. Just doing the math is easier — and it’s lazy.
[3] For instance, Joan and Richard Sweeney’s story about a babysitting co-op as an analogy for recessions — brought to renewed prominence by Paul Krugman — is an excellent example of a math-free “model” that helps uncover basic mechanisms operative in the economy. It’s a lot more difficult to understand the notion of an economy-wide shortfall in demand if you go straight to a DSGE model, say. This notion of math encouraging lazy thinking might be closely intertwined with the reliance on other conventions in the profession. Such standards may often be crucial for mutual understanding, but taking them for granted may create problems. One example I’m thinking of is the focus on GDP in the analysis of the business cycle, and defining the latter as capturing co-movement phenomena with a periodicity of between 6 and 32 quarters, even though this might not be the most fruitful approach.

On Maps and Economics (3): Are we all supremely rational (and identical)?!

Having discussed the purpose of economic models in the first post in this series, in the previous post, I looked at the first of three exemplary assumptions employed by (macro-) economists — that households are infinitely lived. My goal is to show why, despite appearing to be outlandish, such assumptions are used and, indeed, can be useful.

A second assumption that has come under heavy scrutiny is to presume that agents have rational expectations. Intuitively, that amounts to saying that agents “know the model” and form expectations about the future that are correct on average over time, i.e. their expectations aren’t systematically biased. One justification for this assumption — surprise — is that it makes the models easier to solve. A second justification is that certain alternatives have very uncomfortable implications. For instance, imagine we’d assume instead that your expectation for next year’s level of inflation (i.e. how much prices go up) is always the value inflation has today — so-called ‘purely adaptive expectations’. Now suppose that the central bank in your country tells you it’s going to do everything it takes to raise inflation next year; in fact, everything around you points towards a rapid rise in inflation [1]. If we assume adaptive expectations, we basically have to suppose that agents completely disregard all these signals — and that’s not a very good assumption either! Fortunately there’s a large number of alternatives to strict rational expectations that have been and are being studied by economists. We might assume that some people, or people sometimes, have rational expectations but at other times are ‘backward-looking’ — in fact, many central banks employ models with a structure of this sort. Or we can explicitly model learning (here’s a good interview with one of the leading researchers on the topic) or introduce limitations as suggested by behavioural economics. Doing so typically comes at the cost of greater complexity — which makes the model difficult to solve — or introducing greater simplicity elsewhere. In all such cases, rational expectations represents a helpful benchmark with which we may compare these alternatives. Moreover, if we’re not really interested in questions of information-processing, then assuming rational expectations may be a neat shortcut that permits us to introduce greater complexity in other aspects of the economy.

Finally, and perhaps most bizarrely — at least judging by my own intuition and a highly unrepresentative survey among fellow grad students at Oxford — macroeconomists frequently assume a ‘representative agent’. That is to say, we typically assume that there are loads and loads of agents in the economy, but that they’re all identical, e.g. in terms of what they like (‘preferences’), how wealthy they are or how smart they are. Put differently, we set aside heterogeneity. We also speak of a representative agent when economic actors do in fact differ, but in such a way that if we add up their choices, the result is mathematically equivalent to the choice taken by one individual (or many identical individuals). Thus, in 2003 perhaps the most influential living macroeconomist, Robert Lucas, pithily stated: “For determining the behavior of aggregates [Krusell and Smith (1998)] realistically modeled household heterogeneity just does not matter very much. For individual behavior and welfare, of course, heterogeneity is everything.” If that sounds confusing, imagine that you and your nine friends together buy ten apples regardless of how money is distributed amongst you; then we may as well suppose that you’re one agent as long as we’re concerned only with the aggregate consumption of apples. This example might illustrate why this assumption can in fact be justifiable, at least for particular research questions and particular situations. My personal view, though, is that for a large number of issues that economists ought to confront — note how my normative view on what questions are important comes in — the assumption of a representative agent is not useful. For one, I think in the roughly decade and a half since Lucas made the above quoted statement, economists have realised that heterogeneity does matter for aggregate outcomes. To give two intuitive examples, people with a lot of debt may respond differently to a change in fiscal or monetary policy (the former is in the realm of e.g. the US Treasury whereas the latter is decided by the US Federal Reserve); and automation may affect someone with a college education differently than someone without comparably high “human capital”, as economists would call it. Secondly, I have doubts whether aggregate outcomes, i.e. the level of analysis for which Lucas says realistically modeling heterogeneity does not matter much, are enough to judge the desirability of a policy. In other words, I think inequality matters, and matters a lot, for whether some policy outcome is good or bad. Unfortunately, saying that it’d be nice to include heterogeneous agents isn’t enough — doing so requires a lot of brain power and at least as much computational power; and the latter we didn’t have until recently. Fortunately, economists have been and are developing “heterogeneous agent models” — most people would probably say “non-crazy models” — where the individuals or households we model, or indeed firms and other actors, differ along various dimensions (here’s a good example and e.g. at the Uni of Bonn researchers are setting up a new group that is entirely devoted to the “Macroeconomics of Inequality”). In other words, even though I’ve argued that in some cases assuming “representative agents” can make sense, I also believe that we’re making progress towards being capable of analysing complex models in which we can include such things as me liking chocolate better than you [2].

In summary, in this post and the preceding one I’ve given three examples of assumptions that macroeconomists make when they build particular models (‘maps’ of the economy, in terms of Borges’ short story discussed in the first post) — assumptions that probably strike the reader as crazy [3]. I’ve tried to provide a rationale for why such modelling decisions have been made and why they can be useful and helpful for particular purposes. I’ve flagged that economists have built models that feature alternative assumptions, but that doing so may come at a cost (e.g. in terms of greater complexity). Finally, in the last paragraph, I’ve argued that including heterogeneity instead of representative agents in macroeconomic models — an area I’m very interested in, admittedly — may be valuable in order to answer a number of important questions. My hope is that this simple discussion helped convey to non-economists #WhatEconomistsReallyDo.

[1] Implicitly, I’m assuming that you have some kind of economic model in your head. But in fact that’s not a particularly strong assumption to make. Suppose there are 100 apples in the economy and 100 people who own £1 each. Now give each person another £1 but change nothing else. I bet you’d predict that the price of apples will go up, wouldn’t you? Well, that’s because you have a simple economic model in your mind, one in which more money chasing the same amount of goods drives up prices.
[2] Another dimension of heterogeneity is on the firm side. For instance, economists try to better understand why companies like Facebook or Google or Volkswagen are very successful while others are much less so, and what this implies for the macro-economy.
[3] No doubt, there are many other such assumptions, e.g. that agents maximise utility (or profits), that the economy is stable unless it’s hit by shocks, that labour productivity is “innate and transferable”, whereas many organisations invest in their employees’ organisation-specific skills.

On Maps and Economics (2): (In economic models) you can escape death

Pick up any random article in a macroeconomic journal and you have a good chance that the section setting out the model will begin with something like this (I’ll clarify what the terms exactly mean below): “We assume an infinitely lived [assumption 1] representative [assumption 2] household…”. Having set out the model, the authors (or author) will probably go on to solve the model by assuming that the agents in the economy have rational expectations [assumption 3]. This is a caricature but perhaps not an altogether inaccurate one. Having discussed the purpose of economic models in the preceding entry, this post and the next are aimed at giving an idea of why the outlandish-seeming assumptions employed by (macro-) economists, as illustrated by the three assumptions flagged above, may actually be useful, at least as a starting point [1]. Or to phrase this in technical language: Are macroeconomists completely ludicrous, as the meme suggests? Surprise: I don’t think so.

Let’s take a look at the first assumption then: the use of infinitely lived households as the main agents of our model. At face value, that seems to contradict a fairly basic premise of human existence, i.e. that we’ll all die at some point. One motivation for this assumption is consistent with what a critical reader may have expected: mathematical convenience. For a number of reasons, it turns out that making this assumption allows one to simplify the mathematical structure of an economic model. However, there is a rather more philosophical yet simultaneously pragmatic defence of this assumption as well: Voluntary transfers between parents and children (e.g. due to what’s called an “altruistic bequest motive”) can imply that a representative family behaves as if it is an infinitely-lived individual (that’s due to a famous paper by Robert Barro). Think about it this way: (i) if (!) what we’re interested in is e.g. how much the entire population of a country spends on goods and services; (ii) if assuming infinitely lived households is simpler than modelling how people die and are born; and (iii) if at the aggregate level there isn’t much difference between making this assumption and modelling a more complex structure involving birth and death, then making this outlandish-seeming assumption might not be such a bad idea when it comes to creating a ‘map’ of the macro-economy. Now, e.g. (iii) may only be a rough approximation, but not only might this be better than having none, it also provides a tractable starting point. At least as importantly, for many questions (cf. (i)) we actually do need to explicitly model generational change — think questions involving demographic change or asking which generation will have to bear the costs of climate change/high debt.

In fact, there is a class of models that’s part of the mainstream which is suitable to address such questions and does not make the assumption of infinitely lived agents, so-called ‘Overlapping Generation Models’. They bear this name precisely because they incorporate e.g. two generations living at the same time. As such, they allow us to study e.g. social security systems. On the other hand, they have drawbacks such as being harder to quantify. Therefore, it’s useful to have both types of models — those assuming infinitely lived households and those modelling overlapping generations — and use them to answer different types of questions. Or to put it in terms of Borges’ short story, different maps are useful for different purposes.

[1] I say ‘macroeconomists’ because I’ll focus on this branch of economics as the most frequently attacked (read: discussed) one — and because it’s my main area of interest. Moreover, the description is most relevant for theoretical work, which, as some like to point out, has declined in popularity relative to empirical work.

On Maps and Economics

This post and the following two are primarily aimed at non-economists.

…In that Empire, the Art of Cartography attained such Perfection that the map of a single Province occupied the entirety of a City, and the map of the Empire, the entirety of a Province. In time, those Unconscionable Maps no longer satisfied, and the Cartographers Guilds struck a Map of the Empire whose size was that of the Empire, and which coincided point for point with it. The following Generations, who were not so fond of the Study of Cartography as their Forebears had been, saw that that vast map was Useless, and not without some Pitilessness was it, that they delivered it up to the Incumbencies of Sun and Winters.
This short story, ‘On Exactitude in Science’ by Jorge Luis Borges, may be helpful in conveying to non-economists what economists do — which is what this and a few follow-up posts are aimed at. In short, economists build maps.

But let me take a few steps back first. Publishing articles full of economics-bashing appears to be a profitable or otherwise satisfying business, at least judging by the plethora of pieces of this kind in the Guardian. Thus, “[neoclassical] economics has become an unquestioned belief system and treats anybody who challenges the creed of self-righting markets and rational consumers as dangerous heretics,” claims its economics editor Larry Elliot in the most recent instance, which ticks pretty much all the boxes of Chris Auld’s list of “18 signs you’re reading bad criticism of economics” (for balance, here’s a reply to Auld).

Others are better placed to provide a defence of economics or to provide a cogent assessment of the ‘state of economics’ — for some efforts in this direction see e.g. pieces by Noah Smith or Ricardo Reis (to mention but a few) and, in response to this particular piece in the Guardian, this excellent response from Attanasio et al in Prospect Magazine (also see Frances Coppola’s somewhat more direct response). However, having the privilege of interacting with fellow postgrad students from a range of disciplines might help me appreciate what the stumbling blocks are for non-economists in understanding what economists do. Hence, here are three pieces addressed to non-economists that try to shed some light on this question. The first one discusses at a mostly abstract level “what economists do” — or perhaps more accurately, what their models are supposed to do (I thus omit a large part of economists’ work such as spending months collecting data or long nights cleaning data). The second and third will illustrate this discussion by zooming in on three absurd-sounding assumptions frequently made in macroeconomics and explain why they can nevertheless serve as useful starting points for a cogent analysis of an economic question.

I reckon many people’s sense of what economists do is shaped by business and City economists, making statements like “we forecast the US economy to grow by 3% next year” (another misconception is to confound economists with business people or accountants who try to figure out how to get the most money out of a business). In reality, however, forecasting is only a very small part of the economist’s goal or toolbox. Their actual focus is on understanding what the basic mechanisms are that shape economic interactions. Furthermore, macroeconomics — the branch of economics focused on the aggregate economy — is not all of economics, although it’s getting the most public attention. There’s a lot of other stuff out there that’s also being studied by economists. As Attanasio et al write: “We analyse [data] to understand how people make choices, because that determines how they respond to policies and how they interact. You can ask us about taxes, social mobility, inequality, crime, poverty alleviation, pensions, roads, sanitation, public safety, and, obviously, wine, beer and cider prices.”

The second, and perhaps more interesting stumbling block is that to many external observers, including the media commentariat, economists appear to create parallel worlds in their models by making absurd assumptions, add a layer of maths over such a parallel world, and then draw conclusions from this black box that has significant implications for public policy. Certainly, there’s some truth to that. However, to properly assess which assumptions are absurd or not in what context, the necessary first step is to appreciate what economic models are supposed to do. To me, the short story “On Exactitude in Science” by Borges, which I introduced at the outset, is still the best analogy I know to convey the answer to this question.

Stating that “all models are simplifications”, which is often taken to be the four-word essence of the story in discussions about methodology, is not a sufficient response to criticisms of the methodology of economics. But it’s nevertheless an essential point to grasp when trying to understand what (many) economists are actually doing. The next step, which is not revealed by the story, is to appreciate the difficult tradeoffs to make, the fine line to walk when choosing between which assumptions to make in what context — the decisions involved when designing a new map or choosing a particular type of map off the shelf, so to speak. To extend the analogy somewhat further, different types of maps are used for different purposes. Thus, political maps don’t show topographic features like mountains but they do show state and national boundaries. Meanwhile, climate maps show information about the climate of an area, while road maps show major lines of transport etc. Some maps have high resolutions, others have low resolutions — and each map may be useful for a different purpose. Deciding which features are worthwhile to highlight and, crucially, which map to use for what purpose, that’s the ‘art’ part in Keynes’ description of economics as “the science of thinking in terms of models joined to the art of choosing models which are relevant to the contemporary world.” (Dani Rodrik’s Economics Rules is excellent on the latter point in particular, if you’re eager for more details). In other words, the goal of economists’ mathematical models is to zoom in on particular aspects of reality (create a particular kind of map) without getting confused by the near-infinite complexity that characterises the world around us and human interactions in particular.

To get a more practical sense of #WhatEconomistsReallyDo, take a look at Quartz’s piece “13 economists on the research that shaped our world in 2017”. And to see some examples of why economists make crazy-sounding assumptions when designing their maps: my next post should be out soon.

P.s.: For the reader well-versed in economics, I apologise in advance for the pain they might experience when reading some of the simplifications I use, or things I omit. That said, I’m pretty sure it’d be more painful if I didn’t make them and forced a non-economist to read this nonetheless.

Efficiency and equality (3): The IMF and a look at some policies

In the preceding two posts I looked at two different reasons why the “Big Tradeoff” between equality and efficiency might not be so great after all. In the first blogpost I suggested that neither theory nor evidence unambiguously suggest that tax-and-redistribute interventions have detrimental effects on growth; instead, they might promote both equality and efficiency. I next proposed that this goal might be achieved by different types of policies that relate to an economy’s institutional set-up.

It’s a neat coincidence that my blog posts appear alongside the release of the IMF’s October 2017 Fiscal Monitor, which also looks at the relationship between inequality and growth. Thus, it is argued that “[a] diversity of experiences and empirical analysis suggest that there is no systematic adverse trade-off between increasing growth and decreasing inequality.” In particular, it looks at fiscal policies that can help achieve redistributive objectives – tax rates at the top of the income distribution; the introduction of a universal basic income; and the role of public spending on education and health – and makes the case, amongst other things, that there is scope for advanced economies with relatively low levels of progressivity in their personal income tax code “for raising the top marginal tax rates without hampering economic growth” (see chart). The Guardian unsurprisingly has a euphoric take on this; the Telegraph highlights equally unsurprising backlash from the US administration.
[Infographic from the IMF October 2017 Fiscal Monitor.]

This third blog post in my mini-series goes in a similar direction insofar as I briefly summarise three illustrative policies (not discussed by the IMF) that might help both equality and growth. These ideas are neither novel, nor am I wholeheartedly endorsing them – after all, there are considerations other than equity and efficiency (not to mention that the complexity of economies means that predicting the effects of major policies is inevitably associated with considerable uncertainty). However, they should certainly be part of the policy discussion and exemplify the kind of policies I was talking about in my second post in this mini-series.

Let’s start with tax policy – after all, it’s all the rage in the US capital. There is a good case to be made in favour of abolishing the tax deductibility of corporate interest as well as that of mortgage interest. The former (“CID”) allows companies to deduct interest payments, but not dividends, from corporate profits before tax is paid, while the latter (“MID”) permits homeowners to deduct the mortgage interest they pay from their taxable income. Both schemes tend to encourage greater leverage, thereby making the economy more prone to boom and bust – which in turn hurts growth. Moreover, MID incentivizes wealthier households to purchase more expensive homes and helps drive a price spiral for land and real estate (think of raising demand in the context of fixed or very inelastic supply). As argued in the preceding post, such investments in land are unlikely to be conducive to growth but simultaneously increase wealth inequality, which spills over into income inequality. Because of the gap it creates between equity and debt instruments, MID tilts investment away from e.g. the equity market towards unproductive assets like land. Abolishing these deductions would generate large revenues for the government. For instance, the Economist quotes a figure of additional revenue generated by abolishing CID in the US equal to around 1.6% of GDP as of 2013. Such revenue could be used to fund a cut in corporate as well as income taxes (and their balance could be designed so as to reduce inequality) or, alternatively, finance e.g. infrastructure investments.

It follows from the above that abolishing MID would reduce demand for land and real estate. If the goal is to provide affordable housing, then reining in zoning laws and other restrictions on housing supply and land use is an additional, complementary approach which plausibly promotes both growth and equality. These regulations are not without economic justification. In principle, they’re supposed to eliminate negative externalities arising from the proximity of incompatible land uses – a dirty factory next to a family house is a drastic example that gets at the basic idea. Nowadays, however, they arguably owe their (continued) existence more to interest group politics than to economic rationalism – think of the motivation of owners of houses in London’s greenbelt. Moreover, as Noah Smith notes, the restrictive effects of land use regulations are especially problematic when a shift from manufacturing towards knowledge-based industries means that the economy depends increasingly on highly concentrated urban clusters. To quote from a paper by Ed Glaeser and Joe Gyourko which Smith references in his article, “Higher economic productivity in San Francisco now leads to higher prices, not more homes and more workers”. Meanwhile, in his job market paper, Andrii Parkhomenko looks at housing supply regulations in the US. His model suggests that “the rise in regulation accounts for 23% of the increase in wage dispersion and 85% of the increase in house price dispersion across metro areas from 1980 to 2007.” Moreover, in the counterfactual scenario without the increase in regulations, “more workers would live in productive areas and output would be 2% higher.”

Third, recall that I noted that the “too big to fail” phenomenon is probably to the detriment of both efficiency and equality. As Anat Admati and Martin Hellwig as well as many others have argued, the most straightforward way to get rid of too big to fail – and thus the associated subsidy to bankers which tends to raise inequality plus the increased risk of financial recessions which hurt growth – is to raise the equity capital required of systemically important financial institutions. John Cochrane has an excellent explanation of the likely effects this would have. The most important point is that it eliminates the risk of bank runs and, therefore, the need for states to stand behind banks and bail them out if necessary. Consequently, the moral hazard problem of taking excessive risks while “socialising the losses”, to use a catchphrase, would be reduced.[1]

I’ve deliberately picked three policies that enjoy, I would claim, fairly broad support among economists. All three are likely to improve economic outcomes in both the efficiency and the equity dimension – even though that might not seem to be the case at first sight (which is one difference between my discussion and that of the IMF). If the higher-level discussion of the inescapability of the “Big Tradeoff” hasn’t convinced you, perhaps these examples will.

[1] Cochrane and some others such as Adair Turner and John Kay – coming from different directions, as Felix Martin notes – go even further and endorse “narrow banking”. This is similar to the “Chicago Plan”, which Kumhof and Benes revisit here. It’s a very interesting proposal, but I’m still only beginning to make up my mind about it.

Efficiency and equality (2): it’s not all about redistribution

In the previous blog post I drew attention to research suggesting that, on the whole, greater redistribution might actually improve both equality and efficiency (the latter being proxied by growth), and that the empirical evidence suggests that inequality itself has a negative effect on growth. I contrasted this with the “received view” that there is an inevitable tradeoff between the two goals, as articulated most famously in Arthur Okun’s Equality and Efficiency: The Big Tradeoff. This time I want to highlight that there are measures other than “tax and transfer” that may reduce inequality and improve growth.

Let’s look at the problem at a conceptual level first. I find it helpful to think about it using a simple two-person (or two-group) graph like the one below.

[Figure: utility/income possibility frontier for two individuals or groups]

One way of reading Okun is that he assumes we’re at a point on the Pareto frontier like A [1] and that redistribution from 2 to 1, say, not only makes 2 worse off, but that, in addition, resources are lost in the process. That is, we don’t just move along the curve; we also move south-west. What I discussed in the previous post is, very roughly, the possibility that even though this might be the case, greater equality might actually push out the Pareto frontier to the north-east, such that there is more cake to distribute between individuals 1 and 2. Or, perhaps more accurately, if we think of an economy’s growth rate as the speed with which the frontier moves to the north-east, then a measure reducing inequality might increase the speed with which the frontier travels outwards. Note that in making this point I’ve introduced a dynamic dimension to the problem (a small plotting sketch after the list below illustrates the picture I have in mind).

More generally, there are two categories of measures that improve both growth and equality.
(1) Suppose we’re initially at a point like B, i.e. we’re not at the efficiency frontier. Then there might be measures that move us closer to the frontier whilst also reducing inequality. Note that this category refers to ‘static’ comparisons. I provide some examples below.
(2) Regardless of whether we’re initially at a point like A or one like B, some policies might allow us to increase equality whilst simultaneously raising the speed at which the ‘cake’ grows. In contrast to (1), these effects are dynamic in nature. The human-capital channel introduced in the previous post presumably fits into this category insofar as it raises the economy’s growth potential over time.
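Here is that sketch – a minimal, purely illustrative bit of Python whose functional form for the frontier is an arbitrary choice of mine and carries no economic content. It simply draws a frontier with a point A on it, a point B inside it, and an outward-shifted frontier standing in for growth.

```python
# Purely illustrative: a concave utility/income possibility frontier for
# individuals 1 and 2, a point A on the frontier, a point B inside it, and
# an outward-shifted frontier representing growth. The quarter-circle shape
# is an arbitrary parametrisation, not derived from any model.
import numpy as np
import matplotlib.pyplot as plt

u1 = np.linspace(0, 1, 200)
frontier_today = np.sqrt(1 - u1**2)                       # today's frontier
u1_later = np.linspace(0, 1.3, 200)
frontier_later = 1.3 * np.sqrt(1 - (u1_later / 1.3)**2)   # frontier after growth

plt.plot(u1, frontier_today, label="frontier today")
plt.plot(u1_later, frontier_later, "--", label="frontier after growth")
plt.scatter([0.8], [0.6], zorder=3, label="A (on the frontier)")   # 0.8^2 + 0.6^2 = 1
plt.scatter([0.45], [0.45], zorder=3, label="B (inside the frontier)")
plt.xlabel("income of individual/group 1")
plt.ylabel("income of individual/group 2")
plt.legend()
plt.show()
```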

Let’s move from concepts to the real world. As evidenced by the relatively recent panel discussion hosted by the Brookings Institution (its major conclusions are neatly summarised by Martin Sandbu here and Brad De Long here), the focus has shifted to explaining unequal incomes in terms of rents, i.e. returns over and above what would arise if a market functioned as in the neoclassical paradigm. This is significant because such rigidities and rent-seeking both produce inequality and harm economic efficiency, as Mark Thoma notes. And it highlights possibilities to improve equality and efficiency essentially by removing these distortions and redesigning the “rules of the game” (rather than by tax and transfer). I’ll focus on identifying a few important examples of such inefficiencies in this post before addressing some policy proposals in the next one.

The first area concerns market power. Here’s one piece of evidence. Corporate profits are at record levels, which raises inequality along at least one dimension (capital vs labour). If markets were competitive, such high profits should go hand in hand with correspondingly high interest rates. But this is the opposite of what we observe, as Larry Summers noted in his comments at the Brookings event (see also Jason Furman and Peter Orszag here). Instead, these profit levels are likely at least in part due to high rents – which reflect economic inefficiency. Indeed, the economics profession is increasingly aware of rising levels of monopolistic power and its implications for inequality (see e.g. the recent paper by Mordecai Kurz, explained in this Project Syndicate piece). In part, concentration of power is due to structural/technological factors. But to the extent that political influence causes a transfer of resources to a smaller group of corporations, this may both increase inequality and reduce productive capacity. At a theoretical level – non-economists, perhaps jump to the next paragraph – if you look e.g. at endogenous growth models of the expanding-variety type, then monopoly power creates two distinct distortions: not only is an inefficiently low quantity of each intermediate good produced, the growth in the variety of those goods is also inefficiently slow.
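To give the first of these distortions a concrete, textbook shape (this is the standard monopoly-pricing condition, a schematic stand-in rather than the full expanding-variety model): a producer facing a constant price elasticity of demand \varepsilon>1 sets marginal revenue equal to marginal cost, i.e.
p\left(1-\frac{1}{\varepsilon}\right)=mc \quad\Longrightarrow\quad p=\frac{\varepsilon}{\varepsilon-1}\,mc>mc,
so each intermediate good is priced above marginal cost and produced in an inefficiently small quantity (the static distortion); the inefficiently slow introduction of new varieties comes on top of this.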

Second, developments relating to land and real estate may hold back growth as well as reduce equality. As Stiglitz notes, “capital” and “wealth” are distinct. Indeed, a significant proportion of the increase in wealth over the past decades is accounted for by an increase in the value of land rather than by greater amounts of capital. Given the concentrated ownership of land, increases in land prices tend to be associated with greater wealth inequality. Moreover, as Adair Turner notes, because income includes (imputed and actual) rent for the owners of real estate, a more unequal distribution of wealth spills over into a more unequal distribution of income. Finally, to the extent that investment in land – and the real estate built on it (see below for one factor contributing to real estate booms) – crowds out investment in real productive capital, rising land values also go hand in hand with lower productive capacity.

Third, much attention is devoted to the role of finance. One strand of the literature discusses the nexus of finance, inequality and growth as it relates to macroeconomic (in-)stability and the global financial crisis. In a nutshell, the argument here is that prior to the financial crisis the effects of rising inequality and stagnating incomes were temporarily camouflaged by rising household debt, allowing households to maintain or increase consumption despite the lack of income growth. The demand for credit was satisfied by a ballooning financial sector, which benefited in the form of high and rising profits. In the U.S., as Richard Deeg notes, the financial sector’s share of all profits rose from 10 percent in the early 1980s to 40 percent in 2007. Ultimately, however, high leverage and a large and complex financial sector were among the main factors causing the financial crisis, which brought a large loss of income (a static effect) and lower growth (a dynamic one). Another strand of this literature looks at pay and related measures. For instance, Philippon and Reshef estimate that between a third and a half of Wall Street’s higher pay derives from rents rather than productivity. As Martin Wolf discusses, one source is the implicit subsidy through lower borrowing costs that banks enjoy by being too big to fail. Again, it is likely that this is associated with both higher inequality and lower economic efficiency. A recent study from the Bank for International Settlements goes further, suggesting that the financial sector’s rising share of GDP is one causal driver not just of inequality but also of declining productivity growth – recall the idea of different speeds at which the efficiency frontier travels – and hence of living standards in the rich world. The reason is that, as a matter of risk management, finance favours activities that can pledge physical collateral, notably construction (hence the concentration in mortgage lending, e.g. in the US and the European periphery), but those activities have notoriously low productivity growth, whereas R&D-intensive sectors remain underfunded because they frequently lack collateral to secure loans.

This discussion highlights at least three ways in which thinking about the effects of inequality has changed a lot over the past decade or so. First, “the famed growth-equality trade-off looks less and less like one”, as Martin Sandbu puts it. Second, a significant reason for this is that the focus is less on redistribution through tax-and-transfer measures; instead, more attention is given to the “rules of the game” which determine market incomes in the first place. A final, distinct but related, point is that these market incomes are increasingly seen as determined not so much impersonally by perfectly functioning markets (the neoclassical view) as, to a significant extent, by political or economic power, imperfect information, rent-seeking and the like. Put differently, in many areas we may well have fallen far behind the frontier(s).

In the next post I’ll take a brief look at some possible policies that might enhance both equality and efficiency.

[1] That is to say, there’s no way of making one individual better off without making another individual worse off.

Efficiency and equality (1): The (not so) Big Tradeoff

Forty years ago Arthur M. Okun (1928-1980) published an influential book, Equality and Efficiency: The Big Tradeoff. In that book, he affirmed that “efficiency is bought at the cost of inequalities in income and wealth”, thus suggesting that societies faced a “Big Tradeoff”. This claim still appears to form the implicit premise underlying much of public policy-making. Whether it is true, or the extent to which it is true, has great significance for debates about rising inequality and slowing growth. In this blog post I summarise reasons for thinking that this long-held belief in a pervasive tradeoff is not well-supported by theoretical and empirical evidence. Think of it as a primer laying the groundwork for a couple of upcoming posts.

 

Especially since the publication of Thomas Piketty’s Capital, social inequality has risen up the ranks of topics debated in both the academic literature and the broader public. At the same time, growth has slowed down in most rich capitalist countries since the Great Recession, the worry increasingly being that the underlying factors are not just cyclical but also structural in nature (whether due to demand-led “secular stagnation” as suggested by Larry Summers or supply-side/productivity problems as articulated most prominently by Robert Gordon and summarised by Martin Wolf here). Clearly, both phenomena pose significant challenges for public policy: we want living standards to rise, and we want such growth to be “inclusive”. If Okun is right, then pursuing greater equality as a goal of policy-making conflicts with the promotion of economic growth.

 

Consider first reasons for thinking that Okun was right. When governments attempt to tax the rich to transfer income to the poor, Okun wrote, “money must be carried … in a leaky bucket. Some of it will disappear in transit, so the poor will not receive all the money that is taken from the rich.” (p.12). Okun attributed these losses to the administrative costs of taxing and transferring, and to disincentive effects (on labour supply and savings) – concepts and arguments well-established in the economic literature.[1] For a more detailed, non-technical explanation of Okun’s reasoning, take a look at this piece by Noah Smith.

 

It is in part due to the neat fit between Okun’s argument and economists’ preferred way of thinking – in terms of incentives – that, for a long time, the existence of such a tradeoff was more or less an unquestioned fact in academia and policy-making. More recently, however, doubts have become increasingly prominent. One reason for this change is that consensus estimates of the “allocative efficiency costs” of income taxation – that is, the extent to which such taxation distorts the allocation of resources like financial and physical capital or labour – have become smaller [2]. But note that from this fact alone, we would still infer that there is a tradeoff, albeit a quantitatively small(er) one. And indeed, to my knowledge, nobody seriously disputes that there is such a tradeoff somewhere when it comes to tax and transfer policy – nobody is arguing that, say, income tax rates of 90% would not suffer significantly from Okun’s “leaky bucket” problem.

 

To better understand why some economists now believe that the tradeoff is smaller than hitherto presumed – or perhaps not a tradeoff at all – it is important to distinguish between two different concepts, viz., between the immediate economic costs of redistribution considered by Okun, on the one hand, and the effects of inequality itself on growth, on the other hand. This helpful distinction is drawn by Messrs Berg, Ostry and Tsangarides in an important IMF discussion note (see also this older, brief piece by Berg and Ostry). To put the same thought differently, redistribution may affect growth directly – the idea captured by Okun’s leaky-bucket experiment – as well as indirectly through the effect of net inequality on growth. This is illustrated in the following figure (a more detailed version of which appears in the IMF paper linked to above).

[Figure: direct and indirect channels running from redistribution and (net) inequality to growth]
Okun’s argument only considers the direct effect. By looking at indirect effects, we can identify a plurality of potential ways in which inequality – again, I’m not talking about redistribution for now – might actually reduce growth. In fact, Okun himself hints at this possibility (p.10, [3]), but without elaborating. Typically, the mechanisms underlying such (indirect) effects are rooted in market failure of one form or another (a thought I’ll return to in a future post). Most prominently, what goes under the name of “human capital accumulation theory” suggests that the presence of market imperfections implies that individuals’ ability to invest depends on their income or wealth. If this is the case (and in particular, if capital markets don’t facilitate borrowing even when projects are worthwhile on a risk-return basis), poor individuals may not be able to afford worthwhile investments – think full-time education like secondary school or grad school (which I refuse to believe isn’t productivity-enhancing!) or professional training – even though the rate of return is high [4]. Indeed, there is lots of research suggesting that the importance of education has been reinforced by skill-biased technological change. In turn, under-investment by the poor implies that growth is lower than it would be with perfect financial markets. High inequality might, therefore, to the extent that it creates or exacerbates inefficiencies in human capital investment, be detrimental to growth. Note that inequality (a static concept) thus also affects social mobility (a dynamic concept); Alan Krueger in 2012 called this relationship the “Great Gatsby Curve” (Paul Krugman has a brief account here). So much for theory. What about the empirical evidence – after all, we want to be good, evidence-grounded economists. Well, while these are new-ish results that demand further investigation, several recent studies have produced consistent evidence that inequality reduces growth and the duration of growth spells. If you don’t mind some figures, read on, otherwise jump to the next paragraph: Berg et al estimate that, on average, a rise of 5 Gini points reduces average annual growth by half a percentage point. A recent OECD study makes similar estimates; expressed retrospectively, it finds that the rise in inequality between 1985 and 2005 in 19 OECD countries (including e.g. the US, UK, Germany, France, Sweden and Denmark) reduced cumulative growth between 1990 and 2010 by a sizeable 4.7 percentage points.
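To relate the two sets of numbers to each other – a rough back-of-envelope of my own, not a calculation reported in either study – note that if annual growth is lowered by x percentage points for T years, cumulative output at the end of the period is lower by approximately
1-\left(\frac{1+g-x}{1+g}\right)^{T} \approx T \cdot x \quad \text{for small } x,
so the OECD’s 4.7 percentage points of cumulative growth lost over roughly twenty years corresponds to an annual drag in the ballpark of a quarter of a percentage point – the same order of magnitude as the Berg et al estimate for a five-Gini-point rise in inequality.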

 

What lessons should we draw from theory and evidence? First of all, if we recall the distinction drawn between redistribution and inequality (and their respective relationships to growth), then the following is immediate: even if there were no tradeoff between equality and efficiency, and perhaps even a positive relationship from the former to the latter, there could still be one between redistribution and efficiency, provided the direct effect of redistribution is sufficiently negative, i.e. large enough to offset the positive effect of equality on growth introduced above [5]. Hence, the empirical evidence cited so far does not, by itself, establish that tax-and-transfer redistribution – indeed, any policy involving taxation – is free of a tradeoff. The direct effect would be negative if, for example, high(er) levels of redistribution imply a very leaky bucket, to use Okun’s metaphor. In that case, if the negative direct effect outweighs the positive indirect effect, redistribution would still involve a tradeoff. In a future blog post I will suggest that even such a configuration need not necessitate pessimism from a policy-making perspective, because there are tools other than redistribution to affect inequality. In fact, however, consistent with the “separate” results for the effects of equality (promoting growth) and taxation (reducing growth only a little) presented in the above paragraphs, both the 2014 piece by Berg et al and the OECD study mentioned find that the combined direct and indirect effects of redistribution are on average [6] pro-growth.[7]
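Schematically, and in my own notation rather than the IMF’s, the argument amounts to a simple decomposition:
\frac{d\,\text{growth}}{d\,\text{redistribution}}=\underbrace{\frac{\partial\,\text{growth}}{\partial\,\text{redistribution}}}_{\text{direct effect (the leaky bucket)}}+\underbrace{\frac{\partial\,\text{growth}}{\partial\,\text{inequality}}\cdot\frac{\partial\,\text{inequality}}{\partial\,\text{redistribution}}}_{\text{indirect effect}}.
The first term is plausibly negative; the second is plausibly positive, since redistribution lowers net inequality and lower inequality appears to raise growth; and the empirical claim is that, on average, the sum is positive.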

 

What should we take away from this? A cautious interpretation is that the results imply that policymakers should stop assuming that redistributive measures inevitably conflict with the goal of greater efficiency. A concomitant cautious recommendation that should – yes, I know, “should” and “politicians” have had a strained relationship of late – be acceptable to policy-makers of any ideological conviction is that any specific policy should be evaluated carefully on its own terms rather than rejected or accepted on the basis of a tight prior belief in the ‘Big Tradeoff’. As the results introduced above are averages, the impact of different redistributive measures on efficiency and equality is likely to vary in practice, both in sign and in magnitude. For instance, a redistributive policy promoting educational outcomes is more likely to have positive growth effects than a purely consumption-oriented one, holding the redistributive effect of the two measures constant. In other words, this post is a call for more diffuse priors and less reliance on established beliefs in necessary tradeoffs.

 

In the next post I’ll look at the relationship between inequality and growth from a different angle – an ‘institutionalist’ perspective, if the label is helpful – focusing in particular on what’s sometimes called the ‘rules of the game’, i.e. regulation of markets and other aspects of the economic and legal framework.

Footnotes

[1] These thoughts were subsequently taken up and formalised by researchers writing in the neoclassical paradigm, examples being Mirrlees (1971) and Lazear/Rosen (1981).
[2] See e.g. this meta-study by Saez et al (2012).
[3] “[Obvious cases of inequality of opportunity] can be reduced by public policies that would enhance both equality of income and efficiency. In such cases, society can get more of two good things rather than sacrificing one for the other.” (p.10)
[4] Both the rate of return to the individual – which together with costs and affordability influences whether the individual pursues the option or not – and the rate of return to society – which impacts the overall growth rate of the economy alongside a myriad of other micro decisions.
[5] To be precise, there’s possibly a difference between saying ‘greater inequality reduces growth’ and ‘greater equality increases growth’ unless we’re willing to smuggle in assumptions about symmetry and/or linearity.
[6] “On average” is another important qualifier: I do not address questions of cross-sectional heterogeneity or of time- and state-dependence here in any detail.
[7] Berg et al. state: “[R]edistribution appears generally benign in terms of its impact on growth; only in extreme cases is there some evidence that it may have direct negative effects on growth. Thus the combined direct and indirect effects of redistribution — including the growth effects of the resulting lower inequality — are on average pro-growth.”