Monday, May 30, 2016

The incredible miracle in poor country development


The amazing improvement in the quality of life of the world's poor people should be common knowledge by now. For example, you have the now-famous "elephant graph", by Branko Milanovic, showing recent income growth at various levels of the global income distribution:


This graph shows that over the last three decades or so, the global poor and middle class reaped huge gains, the rich-country middle-class stagnated, and the rich-country rich also did quite well for themselves.

You also have the poverty data from Max Roser, showing how absolute poverty has absolutely collapsed in the last couple of decades, both in percentage terms and in raw numbers of humans suffering under its lash:


This is incredible - nothing short of a miracle. Nothing like this has ever happened before in recorded history. With the plunge in global poverty has come a precipitous drop in global child mortality and hunger. The gains have not been even - China has been a stellar outperformer in poverty reduction - but they have been happening worldwide:


The fall in poverty has been so spectacular and swift that you'd think it would be a stylized fact - the kind of thing that everyone knows is happening, and everyone tries to explain. But on Twitter, David Rosnick strongly challenged the very existence of a rapid recent drop in poverty outside of China. At first he declared that the poor-country boom was purely a China phenomenon. That is, of course, false, as the graphs above clearly show. 

But Rosnick insisted that poor-country development has slowed in recent years, rather than accelerated, and urged me to read a paper he co-authored for the think tank CEPR, purporting to show this. Unfortunately that paper is from 2006, and hence is now a decade out of date. Fortunately, Rosnick also pointed me to a second CEPR paper, from 2011, by Mark Weisbrot and Rebecca Ray, that acknowledges how good the 21st century has been for poor countries:
The paper finds that after a sharp slowdown in economic growth and in progress on social indicators during the second (1980-2000) period, there has been a recovery on both economic growth and, for many countries, a rebound in progress on social indicators (including life expectancy, adult, infant, and child mortality, and education) during the past decade. 
Weisbrot and Ray, averaging growth across country quintiles, find the following:


By their measure, the 2000-2010 decade exceeds or ties the supposed golden age of the 60s and 70s, for all but the top income quintile.

I'm tempted to just stop there. Because Weisbrot and Ray average across countries rather than across people (as Milanovic does), China is merely one data point among hundreds in their graph above. So the graph clearly shows that Rosnick is wrong: the recent, unprecedented progress of the world's poor countries is not just a China story. Case closed.

But I'm not going to stop there, because I think even Weisbrot and Ray are giving the miracle short shrift, especially when it comes to the 1980s and 1990s. 

See, as I mentioned, Weisbrot and Ray weight across countries, not across people:
Finally, the unit of analysis for this method is the country—there is no weighting by population or GDP. A small country such as Iceland, with 300,000 people, counts the same in the averages calculated as does China, with 1.3 billion people and the world’s second largest economy. The reason for this method is that the individual country government is the level of decision-making for economic policy. 
Making Iceland equal to China might allow a better analysis of policy differences (I have my doubts, since countries are all so different). But it certainly gives a very distorted picture of the progress of humankind. Together, India and China contain well over a third of humanity, and almost half of the entire developing world. 

And when you look at the 1980s and 1990s, you see that the supergiant countries of India and China did extremely well during those supposedly disastrous decades. Here's Indian real per capita GDP:


As you can see, during the 1960s, India's GDP increased by a little less than a third - a solid if unspectacular performance. From 1970 to 1980 it increased by perhaps a 10th - near-total stagnation. In the 1980s, it increased by a third - back to the same performance as the 60s. In the 1990s, it did even better, increasing by around 40 percent. And of course, in the 2000s, it has zoomed ahead at an even faster rate.
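To put those decade-by-decade figures in more familiar terms, here's a quick conversion into rough annualized growth rates. This is just a sketch - the decade growth factors are my approximate readings of the chart described above, not official statistics:

```python
# Rough annualized rates implied by the decade-level growth factors eyeballed
# above (approximate readings of the chart, not official statistics).
decade_growth = {"1960s": 1.30, "1970s": 1.10, "1980s": 1.33, "1990s": 1.40}
for decade, factor in decade_growth.items():
    annual_rate = factor ** (1 / 10) - 1
    print(f"{decade}: ~{annual_rate:.1%} per year in real per capita GDP")
# Roughly 2.7%, 1.0%, 2.9%, and 3.4% per year, respectively.
```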

So India had solid gains in the 80s and 90s - only the new century has seen more progress in material living standards. Along with Indian growth, as you might expect, has come a huge drop in poverty.

Now China:


The 60s were a disaster for China, with GDP essentially not increasing at all. It's hard to see from the graph, but the 70s were actually great, with China's income nearly doubling. The 80s, however, were even better, with GDP more than doubling. The 90s were similarly spectacular. And of course, rapid progress has continued in the new century.

So for both of the supergiant countries, the 80s and 90s were good years - better than the 60s and 70s. Collapsing these billions of people into just two of the hundreds of data points, as Weisbrot and Ray do, turns these miracle decades into a seeming disaster in the averages - but the truth is that as go India and China, so goes the developing world. 

Now let's talk about policy.

My prior is that the 80s and 90s look bad for poor countries in Weisbrot and Ray's dataset - and the 70s look great - because of natural resource prices. Metals prices rose steadily in the 60s, surged in the 70s, then collapsed in the 80s and 90s:

Oil prices didn't rise in the 60s, but boy did they soar in the 70s and collapse in the 80s and 90s:


The same story is roughly true of other commodities.

The reason this matters: the developing world contains a large number of small countries whose main industry is the export of natural resources, and whose economic fortunes rise and fall with commodity prices. Just look at this map of commodity importers and exporters, via The Economist:


Yep. Most of the countries in the developing world are commodity exporters...with the huge, notable exceptions of China and India.

So I strongly suspect that Weisbrot and Ray's growth numbers are mostly just reflections of rising and falling commodity prices. Averaging across countries, rather than people, essentially guarantees that this will be the case.

Do Weisbrot and Ray recognize this serious weakness in their method? The authors mention commodity prices as an explanation for the fast developing-country growth of 2000-2010, but completely fail to bring it up as an explanation for the growth during 1960-1980. In fact, here is what they say:
[T]he period 1960-1980 is a reasonable benchmark. While the 1960s were a period of very good economic growth, the 1970s suffered from two major oil shocks that led to world recessions—first in 1974-1975, and then at the end of the decade. So using this period as a benchmark is not setting the bar too high. 
But the oil shocks, and the general sharp rise in commodity prices, should have helped most developing countries hugely, not hurt them! Weisbrot and Ray totally ignore this key fact about their "benchmark" historical period.

So I think the Weisbrot and Ray paper is seriously flawed. It claims to be able to make big, sweeping inferences about policy by averaging across countries and comparing across decades, but the confounding factor of global commodity prices basically makes a hash of this approach. (And I'm sure that's not the only big confound, either. Rich-country recessions and booms, spillovers from China and India themselves, etc. etc.)

As I see it, here is what has happened with poor-country development over the last 55 years:

1. Start-and-stop growth in China and India in the 60s and 70s, followed by steady, rapid, even accelerating growth after 1980.

2. Seesawing growth in commodity exporters as commodity prices rose and fell over the decades.

Of course, this means that some of the miraculous growth we've seen in the developing world since 2000 is also on shaky ground! Commodity prices have fallen dramatically in the last year or two, and if they stay low, this spells trouble for countries in Africa, Latin America, and the Middle East. Their recent gains were real, but they may not be repeated in the years to come.

But the staggering development of China and India - 37 percent of the human race - seems more like a repeat of the industrialization accomplished by Europe, Japan, Korea, etc. And although China is now slowing somewhat, India's growth has remained steady or possibly even accelerated.

So the miracle is real, and - for now, at least - it is continuing. 

Thursday, May 26, 2016

101ism, overtime pay edition


John Cochrane wrote a blog post criticizing the Obama administration's new rule extending overtime pay to low-paid salaried employees. Cochrane thinks about overtime in the context of an Econ 101 type model of labor supply and demand. I'm not going to defend the overtime rule, but I think Cochrane's analysis is an example of what I've been calling "101ism".

One red flag indicating that 101 models are being abused here is that Cochrane applies the same model in two different ways. First, he models overtime pay as a wage floor:


Then he alternatively models it as a negative labor demand shock:


Well, which is it? A wage floor, or a negative labor demand shock? The former makes wages go up, while the latter makes wages go down, so the answer is clearly important. If using the 101 model gives you two different, contradictory answers, it's a clue that you shouldn't be using the 101 model.

In fact, overtime rules are not quite like either wage floors or negative labor demand shocks. Overtime rules stipulate not a wage level, but a ratio: hours worked by a given employee beyond a certain threshold must be paid at a multiple of that employee's base wage.

In the Econ 101 model of labor supply and demand, there's no distinction between the extensive and the intensive margin - hiring more employees for fewer hours each is exactly the same as hiring fewer employees for more hours each, as long as total labor hours are the same. But with overtime rules, those two are obviously not the same. For a given base wage, under overtime rules, hiring 100 workers for 40 hours each is cheaper than hiring 40 workers for 100 hours each, even though the total number of labor hours is the same. That breaks the 101 model.

With overtime rules, weird things can happen. First of all, base wages can fall while keeping employment the same, even if labor demand is elastic. Why? Because if companies fix the hours that their employees work, they can just set the base wage lower so that overall compensation stays the same, leading to the exact same equilibrium as before.
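To take some hypothetical numbers (mine, not from the rule itself): suppose a salaried employee currently works 50 hours a week for $1,000 in total pay. If the firm keeps hours fixed, it can pick a new base wage so that 40 straight-time hours plus 10 hours at time-and-a-half cost exactly $1,000:

```python
# Hypothetical numbers: 50 hours a week, $1,000 in total weekly pay.
# Solve 40*w + 10*(1.5*w) = 1000 for the new base wage w.
straight_hours, overtime_hours, total_pay, premium = 40, 10, 1000, 1.5
w = total_pay / (straight_hours + overtime_hours * premium)
print(round(w, 2))   # 18.18 -- total compensation and hours are unchanged
```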

Overtime rules can also raise the level of employment. Suppose a firm is initially indifferent between A) hiring a very productive worker for 60 hours a week at $50 an hour, and B) hiring that same worker for 40 hours a week, plus 2 less productive workers at 40 hours a week each for $25 an hour. Overtime rules immediately change that calculation, making option (B) relatively cheaper. In general equilibrium, in a model with nonzero unemployment (because of reservation wages, or demand shortages, etc.), overtime rules should cut hours for productive workers and draw some less-productive workers into employment. In fact, this is exactly what Goldman Sachs expects to happen.
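Here's the A-versus-B cost comparison in code, assuming a 1.5x overtime premium and a 40-hour threshold. The dollar figures come from the example above; the firm's initial indifference presumably reflects output and overhead considerations that the wage bill alone doesn't capture:

```python
def weekly_wage_cost(hours, base_wage, premium=1.5, threshold=40):
    """Weekly pay for one worker: straight time up to the threshold,
    premium pay for every hour beyond it."""
    straight = min(hours, threshold) * base_wage
    overtime = max(hours - threshold, 0) * base_wage * premium
    return straight + overtime

def option_a(premium):
    # One productive worker for 60 hours at $50/hour
    return weekly_wage_cost(60, 50, premium)

def option_b(premium):
    # The same worker for 40 hours, plus two less productive workers
    # for 40 hours each at $25/hour
    return weekly_wage_cost(40, 50, premium) + 2 * weekly_wage_cost(40, 25, premium)

for premium in (1.0, 1.5):   # no overtime rule vs. the 1.5x rule
    print(f"premium {premium}: A = ${option_a(premium):,.0f}, B = ${option_b(premium):,.0f}")
# The rule raises option A's wage bill by $500 a week and leaves B's unchanged,
# so a firm that was on the fence before now leans toward option B.
```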

Now, to understand the true impact of overtime rules, we probably have to include more complicated stuff, like unobservable effort (what if people work longer but less hard?), laws regarding number of work hours, unobservable hours (since the new rule is for salaried employees), sticky wages, etc. But even if we want to think about the very most simple case, we can't use the basic 101 model, since the essence of overtime rules is to force firms to optimize over 2 different margins, and S-D graphs represent optimization over only 1 margin.

Using 101 models where they clearly don't apply is 101ism!

Monday, May 23, 2016

Theory vs. Evidence: Unemployment Insurance edition


The argument over "theory vs. evidence" is usually oversimplified and silly, since you need both to understand the world. But there is a sense in which I think evidence really does "beat" theory most of the time, at least in econ. Basically, I think empirical work without much theory is usually more credible than the reverse.

To show what I mean, let's take an example. Suppose I was going to try to persuade you that extended unemployment insurance has big negative effects on employment. But suppose I could only show you one academic paper to make my case. Which of these two papers, on its own, would be more convincing?


Paper 1: "Optimal unemployment insurance in an equilibrium business-cycle model", by Kurt Mitman and Stanislav Rabinovitch

Abstract:
The optimal cyclical behavior of unemployment insurance is characterized in an equilibrium search model with risk-averse workers. Contrary to the current US policy, the path of optimal unemployment benefits is pro-cyclical – positively correlated with productivity and employment. Furthermore, optimal unemployment benefits react nonmonotonically to a productivity shock: in response to a fall in productivity, they rise on impact but then fall significantly below their pre-recession level during the recovery. As compared to the current US unemployment insurance policy, the optimal state-contingent unemployment benefits smooth cyclical fluctuations in unemployment and deliver substantial welfare gains.

Some excerpts:
The model is a Diamond–Mortensen–Pissarides model with aggregate productivity shocks. Time is discrete and the time horizon is infinite. The economy is populated by a unit measure of workers and a larger continuum of firms...Firms are risk-neutral and maximize profits. Workers and firms have the same discount factor β...Existing matches [i.e., jobs] are exogenously destroyed with a constant job separation probability δ...All worker–firm matches are identical: the only shocks to labor productivity are aggregate shocks...[A]ggregate labor productivity...follows an AR(1) process...The government can insure against aggregate shocks by buying and selling claims contingent on the aggregate state...The government levies a constant lump sum tax τ on firm profits and uses its tax revenues to finance unemployment benefits...The government is allowed to choose both the level of benefits and the rate at which they expire. Benefit expiration is stochastic...


Paper 2: "The Impact of Unemployment Benefit Extensions on Employment: The 2014 Employment Miracle?", by Marcus Hagedorn, Iourii Manovskii, and Kurt Mitman

Abstract:
We measure the aggregate effect of unemployment benefit duration on employment and the labor force. We exploit the variation induced by Congress' failure in December 2013 to reauthorize the unprecedented benefit extensions introduced during the Great Recession. Federal benefit extensions that ranged from 0 to 47 weeks across U.S. states were abruptly cut to zero. To achieve identification we use the fact that this policy change was exogenous to cross-sectional differences across U.S. states and we exploit a policy discontinuity at state borders. Our baseline estimates reveal that a 1% drop in benefit duration leads to a statistically significant increase of employment by 0.019 log points. In levels, 2.1 million individuals secured employment in 2014 due to the benefit cut. More than 1.1 million of these workers would not have participated in the labor market had benefit extensions been reauthorized.

Some excerpts:
[W]e exploit the fact that, at the end of 2013, federal unemployment benefit extensions available to workers ranged from 0 to 47 weeks across U.S. states. As the decision to abruptly eliminate all federal extensions applied to all states, it was exogenous to economic conditions of individual states. In particular, states did not choose to cut benefits based on, e.g. their employment in 2013 or expected employment growth in 2014. This allows us to exploit the vast heterogeneity of the decline in benefit duration across states to identify the labor market implication of unemployment benefit extensions. Note, however, that the benefit durations prior to the cut, and, consequently, the magnitudes of the cut, likely depended on economic conditions in individual states. Thus, the key challenge to measuring the effect of the cut in benefit durations on employment and the labor force is the inference on labor market trends that various locations would have experienced without a cut in benefits. Much of the analysis in the paper is devoted to the modeling and measurement of these trends. 
The primary focus of the formal analysis in the paper is on measuring the counterfactual trends in labor force and employment that border counties would have experienced without a cut in benefits...The first one...allows for permanent (over the estimation window) differences in employment across border counties which could be induced by the differences in other policies (e.g., taxes or regulations) between the states these counties belong to. Moreover, employment in each county is allowed to follow a distinct deterministic time trend. The model also includes aggregate time effects and controls for the effects of unemployment benefit durations in the pre-reform period...The second and third models...reflect the systematic response of underlying economic conditions across counties with different benefit durations to various aggregate shocks and the heterogeneity is induced by differential exposure of counties to these aggregate disturbances. 
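Just to make the logic of that design concrete, here's a toy version of a border-county comparison. It uses made-up data and made-up variable names - an illustration of the identification idea, not the authors' actual estimation code:

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Toy data: for each border-county pair, the within-pair difference in log
# employment growth after the cut, and the within-pair difference in the
# (log) change in benefit duration. All numbers are made up for illustration.
rng = np.random.default_rng(42)
n_pairs = 200
d_log_duration = rng.uniform(-1.0, 0.0, n_pairs)    # bigger cut = more negative
d_log_emp = -0.019 * d_log_duration + rng.normal(0, 0.01, n_pairs)
df = pd.DataFrame({"d_log_emp": d_log_emp, "d_log_duration": d_log_duration})

# Differencing within a pair plays the role of a pair fixed effect: shocks common
# to both sides of a state border drop out, and the coefficient is identified by
# the differential size of the benefit cut across the border.
fit = smf.ols("d_log_emp ~ d_log_duration", data=df).fit()
print(fit.params["d_log_duration"])   # close to -0.019 by construction
```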

These two papers have results that agree with each other. Both conclude that extended unemployment insurance causes unemployment to go up by a lot. But suppose I only showed you one of these papers. Which one, on its own, would be more effective in convincing you that extended UI raises U a lot?

I submit that the second paper would be a lot more convincing. 

Why? Because the first paper is mostly "theory" and the second paper is mostly "evidence". That's not totally the case, of course. The first paper does have some evidence, since it calibrates its parameters using real data. The second paper does have some theory, since it relies on a bunch of assumptions about how state-level employment trends work, as well as having a regression model. But the first paper has a huge number of very restrictive structural assumptions, while the second one has relatively few. That's really the key.

The first paper doesn't test the theory rigorously against the evidence. If it did, it would easily fail all but the most gentle first-pass tests. The assumptions are just too restrictive. Do we really think the government levies a lump-sum tax on business profits? Do we really think unemployment insurance benefits expire randomly? No, these are all obviously counterfactual assumptions. Do those false assumptions severely impact the model's ability to match the relevant features of reality? They probably do, but no one is going to bother to check, because theory papers like this are used to "organize our thinking" instead of to predict reality.

The second paper, on the other hand, doesn't need much of a structural theory in order to be believable. Unemployment insurance discourages people from working, eh? Duh, you're paying people not to work! You don't need a million goofy structural assumptions and a Diamond-Mortensen-Pissarides search model to come up with a convincing individual-behavior-level explanation for the empirical findings in the second paper.

Of course, even the second paper isn't 100% convincing - it doesn't settle the matter. Other mostly-empirical papers find different results. And it'll take a long debate before people agree which methodology is better. 

But I think this pair of papers shows why, very loosely speaking, evidence is often more powerful than theory in economics. Humans are wired to be scientists - we punish model complexity and reward goodness-of-fit. We have little information criteria in our heads.


Update: Looks like I'm not the only one that had this thought... :-)

Also, Kurt has a new discussion paper with Hagedorn and Manovskii, criticizing the methodology of some empirical papers that find only a small effect of extended UI. In my opinion, Kurt's team is winning this one - the method of identifying causal effects of UI on unemployment using data revisions seems seriously flawed.

Friday, May 20, 2016

What's the difference between macro and micro economics?


Are Jews for Jesus actually Jews? If you ask them, they'll surely say yes. But go ask some other Jews, and you're likely to hear the opposite answer. A similar dynamic tends to prevail with microeconomists and macroeconomists. Here is labor economist Dan Hamermesh on the subject:
The economics profession is not in disrepute. Macroeconomics is in disrepute. The micro stuff that people like myself and most of us do has contributed tremendously and continues to contribute. Our thoughts have had enormous influence. It just happens that macroeconomics, firstly, has been done terribly and, secondly, in terms of academic macroeconomics, these guys are absolutely useless, most of them.
Ouch. But not too different from lots of other opinions I've heard. "I went to a macro conference recently," a distinguished game theorist confided a couple of years back, sounding guilty about the fact. "I couldn't believe what these guys were doing." A decision theorist at Michigan once asked me "What's the oldest model macro guys still use?" I offered the Solow model, but what he was really implying is that macro, unlike other fields, is driven by fads and fashions rather than, presumably, hard data. Macro folks, meanwhile, often insist rather acerbically that there's actually no difference between their field and the rest of econ. Ed Prescott famously refuses to even use the word "macro", stubbornly insisting on calling his field "aggregate economics".

So who's right? What's the actual distinction between macro and "micro"? The obvious difference is the subject matter - macro is about business cycles and growth. But are the methods used actually any different? The boundary is obviously going to be fuzzy, and any exact hyperplane of demarcation will necessarily be arbitrary, but here are what I see as the relevant differences.


1. General Equilibrium vs. Game Theory and Partial Equilibrium

In labor, public, IO, and micro theory, you see a lot of Nash equilibria. In papers about business cycles, you rarely do - it's almost all competitive equilibrium. Karthik Athreya explains this in his book, Big Ideas in Macroeconomics:
Nearly any specification of interactions between individually negligible market participants leads almost inevitably to Walrasian outcomes...The reader will likely find the non-technical review provided in Mas-Colell (1984) very useful. The author refers to the need for large numbers as the negligibility hypothesis[.]
Macro people generally assume that the economy contains so many companies, consumers, and other actors that strategic interactions don't matter. Makes sense, right? Macro = big. Of course there are some exceptions, like in search-and-matching models of labor markets, where the surplus of a match is usually divided up by Nash bargaining. But overall, Athreya is right.

You also rarely see partial equilibrium in macro papers, at least these days. Robert Solow complained about this back in 2009. You do, however, see it somewhat in other fields, like tax and finance (and probably others).


2. Time-Series vs. Cross-Section and Panel

You see time-series methods in a lot of fields, but only in two areas - macro and finance - is it really the core empirical method. Look in a business cycle paper, and you'll see a lot of time-series moments - the covariance of investment and GDP, etc. Chris Sims, one of the leading empirical macroeconomists, won a Nobel mainly for pioneering the use of SVARs in macro. The original RBC model was compared to data (loosely) by comparing its simulated time-series moments side by side with the empirical moments - that technique still pops up in many macro papers, but not elsewhere. 
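For the uninitiated, here's roughly what "comparing time-series moments" means in practice - a sketch with made-up quarterly series standing in for the real data one would pull from FRED:

```python
import numpy as np
import pandas as pd
from statsmodels.tsa.filters.hp_filter import hpfilter

# Made-up quarterly log GDP and log investment series (stand-ins for real data).
rng = np.random.default_rng(1)
n = 200
trend_shock = rng.normal(0, 0.01, n).cumsum()
log_gdp = 0.005 * np.arange(n) + trend_shock + rng.normal(0, 0.005, n)
log_inv = 0.005 * np.arange(n) + trend_shock + rng.normal(0, 0.02, n)

# Standard business-cycle moments: HP-filter each series (lambda = 1600 for
# quarterly data), then compare volatilities and comovement of the cyclical parts.
gdp_cycle, _ = hpfilter(pd.Series(log_gdp), lamb=1600)
inv_cycle, _ = hpfilter(pd.Series(log_inv), lamb=1600)
print("sd of GDP cycle:       ", gdp_cycle.std())
print("sd of investment cycle:", inv_cycle.std())
print("corr(investment, GDP): ", np.corrcoef(inv_cycle, gdp_cycle)[0, 1])
# An RBC-style exercise computes the same moments from model-simulated series
# and lays them side by side with the empirical ones.
```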

Why are time-series methods so central to macro? It's just the nature of the beast. Macro deals with intertemporal responses at the aggregate level, so for a lot of things, you just can't look at cross-sectional variation - everyone is responding to the same big things, all at once. You can't get independent observations in cross section. You can look at cross-country comparisons, but countries' business cycles are often correlated (and good luck with omitted variables, too). 

As an illustration, think about empirical papers looking at the effect of the 2009 ARRA stimulus. Nakamura and Steinsson - the best in the business - looked at this question by comparing different states, and seeing how the amount of money a state got from the stimulus affected its economy. They find a large effect - states that got more stimulus money did better, and the causation probably runs in the right direction. Nakamura and Steinsson conclude that the fiscal multiplier is relatively large - about 1.5. But as John Cochrane pointed out, this result might have happened because stimulus represents a redistribution of real resources between states - states that get more federal money today won't have to pay correspondingly more federal taxes tomorrow to cover the resulting debt (assuming the govt pays back the debt), meaning the cross-state comparison captures a relative gain rather than a national one. So Nakamura and Steinsson's conclusion of a large fiscal multiplier is still dependent on a general equilibrium model of intertemporal optimization, which itself can only be validated with...time-series data.

In many "micro" fields, in contrast, you can probably control for aggregate effects, as when people studying the impact of a surge of immigrants on local labor markets use methods like synthetic controls to control for business cycle confounds. Micro stuff gets affected by macro stuff, but a lot of times you can plausibly control for it.


3. Few Natural Experiments, No RCTs

In many "micro" fields, you now see a lot of natural experiments (also called quasi-experiments). This is where you exploit a plausibly exogenous event, like Fidel Castro suddenly deciding to send a ton of refugees to Miami, to identify causality. There are few events that A) have big enough effects to affect business cycles or growth, and B) are plausibly unrelated to any of the other big events going on in the world at the time. That doesn't mean there are none - a big oil discovery, or an earthquake, probably does qualify. But they're very rare. 

Chris Sims basically made this point in a comment on the "Credibility Revolution" being trumpeted by Angrist and Pischke. The archetypical example of a "natural experiment" used to identify the impact of monetary policy shocks - cited by Angrist and Pischke - is Romer & Romer (1989), which looks at changes in macro variables after Fed announcements. But Sims argues, persuasively, that these "Romer dates" might not be exogenous to other stuff going on in the economy at the time. Hence, using them to identify monetary policy shocks requires a lot of additional assumptions, and thus they are not true natural experiments (though that doesn't mean they're useless!). 

Also, in many fields of econ, you now see randomized controlled trials. These are especially popular in development econ and in education policy econ. In macro, doing an RCT is not just prohibitively difficult, but ethically dubious as well.


So there we have three big - but not hard-and-fast - differences between macro and micro methods. Note that they all have to do with macro being "big" in some way - either lots of actors (#1), shocks that affect lots of people (#2), or lots of confounds (#3). As I see it, these differences explain why definitive answers are less common in macro than elsewhere - and why macro is therefore more naturally vulnerable to fads, groupthink, politicization, and the disproportionate influence of people with forceful, aggressive personalities.

Of course, the boundary is blurry, and it might be getting blurrier. I've been hearing about more and more people working on "macro-focused micro," i.e. trying to understand the sources of shocks and frictions instead of simply modeling the response of the economy to those shocks and frictions. The first time I heard that exact phrase was in connection with this paper by Decker et al. on business dynamism. Another example might be the people who try to look at price changes to tell how much sticky prices matter. Another might be studies of differences in labor market outcomes between different types of workers during recessions. I'd say the study of bubbles in finance also qualifies. This kind of thing isn't new, and it will never totally replace the need for "big" macro methods, but hopefully more people will work on this sort of thing now (and hopefully they'll continue to take market share from "yet another DSGE business cycle model" type papers at macro conferences). As "macro-focused micro" becomes more common, things like game theory, partial equilibrium, cross-sectional analysis, natural experiments, and even RCTs may become more common tools in the quest to understand business cycles and growth. 

Monday, May 16, 2016

How bigoted are Trump supporters?


Jason McDaniel and Sean McElwee have been doing great work analyzing the political movement behind Donald Trump. For example, they've shown pretty conclusively that Trump support is driven at least in part by what they call "racial resentment" - the notion that the government unfairly helps nonwhites.

But "racial resentment" is not the same thing as outright bigotry. Believing that the government unfairly helps black people doesn't necessarily mean you dislike black people. So McDaniel and McElwee did another survey asking about people's attitudes toward various groups. Here's a graph summarizing their basic findings:



So, from this graph, I gather:

1. Trump supporters, on average, say they like Blacks, Hispanics, Scientists, Whites, and Police. On average, they say they dislike Muslims, Transgenders, Gays, and Feminists.

2. Trump supporters, on average, say they like Whites a bit more than average, Muslims a lot less, and Transgenders a bit less. They also might say they like Hispanics, Gays, and Feminists somewhat less, though the statistical significance is borderline.

Now here's how Sean McElwee interpreted this same graph:



This interpretation doesn't appear to be supported by Sean's own data. In fact, his data appear to support the opposite of what he claims.

Now, the main caveat to all this is that surveys like this almost certainly don't do a good job of measuring people's real attitudes toward other groups. When someone calls you on the phone (or hands you a piece of paper) and asks you if you like Hispanics, whether you say "yes" or "no" is probably much more dependent on what you think you ought to say than what you really feel. So this survey is probably mainly just measuring differences in how Trump supporters feel they ought to answer surveys.

But even if this survey really did measure people's true attitudes, it still wouldn't tell us what Sean claims it does. Trump supporters, overall, say they like Blacks. And the degree to which they say they like Blacks is not statistically significantly different from the national average. Only when it comes to Muslims and Transgender folks do Trump supporters appear clearly more bigoted than the national average.

But going back to the main problem with surveys like this, it might be that Trump supporters are simply more willing to express their dislike of Muslims and Transgender people in a survey. This may just reflect their general lack of education. More educated people are plugged into the mass media culture, which generally discourages overt expressions of bigotry toward any group. Less educated folks are less likely to have gotten the message that you're not supposed to say bad things about Muslims and Trans people.

So in conclusion, this survey doesn't seem to support the narrative that Trump supporters are driven by bigotry. That narrative might still be true, of course - there are certainly some very loud and visible bigots within Trump's support base (and within his organization). But after looking at this data, my priors, which were pretty ambivalent about that narrative to begin with, haven't really been moved at all.

Sunday, May 15, 2016

Russ Roberts on politicization, humility, and evidence


The Wall Street Journal has a very interesting interview with Russ Roberts about economics and politicization. Lots of good stuff in there, and one thing I disagree with. Let's go through it piece by piece!

1. Russ complains about politicization of macroeconomic projections:
He cites the Congressional Budget Office reports calculating the effect of the stimulus package...The CBO gnomes simply went back to their earlier stimulus prediction and plugged the latest figures into the model. “They had of course forecast the number of jobs that the stimulus would create based on the amount of spending,” Mr. Roberts says. “They just redid the estimate. They just redid the forecast."
I wouldn't be quite so hard on the CBO. It's their job to forecast the effect of policy, and they have to choose a model to do that. It's also their job to evaluate the impact of policy, and they have to choose a model to do that, too. And of course they're going to choose the same model, even if that makes the evaluation job just a repeat of the forecasting job. I do wish, however, that the CBO would try a few alternative models and show how the estimates differ across them. That would be better than what they currently do.

I think a better example of politicization of policy projections was given not by Russ, but by Kyle Peterson, who wrote up the interview for the WSJ. Peterson cited Gerald Friedman's projection of the impact of Bernie Sanders' spending plans. Friedman also could have incorporated model uncertainty, and explored the sensitivity of his projections to his key modeling assumptions. And unlike the CBO, he didn't have a deadline, and no one made him come up with a single point estimate to feed to the media. And some of the people who defended Friedman's paper from criticism definitely turned it into a political issue.

So I think Russ is on point here. There's lots of politicization of policy projections.


2. Peterson (the interviewer) cites a recent survey by Haidt and Randazzo, showing politicization of economists' policy views. This is really interesting. Similar surveys I've seen in the past haven't shown a lot of politicization. A more rigorous analysis found a statistically significant amount of politicization, though the size of the effect didn't look that large to me. So I'd like to see the numbers Haidt and Randazzo get. Anyway, it's an interesting ongoing debate.


3. Russ highlights the continuing intellectual stalemate in macroeconomics:
The old saw in science is that progress comes one funeral at a time, as disciples of old theories die off. Economics doesn’t work that way. “There’s still Keynesians. There’s still monetarists. There’s still Austrians. Still arguing about it. And the worst part to me is that everybody looks at the other side and goes ‘What a moron!’ ” Mr. Roberts says. “That’s not how you debate science.”
Russ is right. But it's very important to draw a distinction between macroeconomics and other fields here. The main difference isn't in the methods used (although there are some differences there too), it's in the type of data used to validate the models. Unlike most econ fields, macro relies mostly on time-series and cross-country data, both of which are notoriously unreliable. And it's very hard, if not impossible, to find natural experiments in macro. That's why none of the main "schools" of macro thought have been killed off yet. In other areas of econ, there's much more data-driven consensus, especially recently. 

I think it's important to always make this distinction in the media. Macro is econ's glamour division, unfortunately, so it's important to remind people that the bulk of econ is in a very different place.


4. Russ makes a great point about econ and the media:
If economists can’t even agree about the past, why are they so eager to predict the future? “All the incentives push us toward overconfidence and to ignore humility—to ignore the buts and the what-ifs and the caveats,” Mr. Roberts says. “You want to be on the front page of The Wall Street Journal? Of course you do. So you make a bold claim.” Being a skeptic gets you on page A9.
Absolutely right. The media usually hypes bold claims. It also likes to report arguments, even where none should exist. This is known as "opinions on the shape of the Earth differ" journalism. This happens in fields like physics - people love to write articles with headlines like "Do we need to rewrite general relativity?". But in physics that's harmless and fun, because the people who make GPS systems are going to keep on using general relativity. In econ, it might not be so harmless, because policy is probably more influenced by public opinion, and public opinion can be swayed by the news.


5. Russ makes another good point about specification search:
Modern computers spit out statistical regressions so fast that researchers can fit some conclusion around whatever figures they happen to have. “When you run lots of regressions instead of just doing one, the assumptions of classical statistics don’t hold anymore,” Mr. Roberts says. “If there’s a 1 in 20 chance you’ll find something by pure randomness, and you run 20 regressions, you can find one—and you’ll convince yourself that that’s the one that’s true.”...“You don’t know how many times I did statistical analysis desperately trying to find an effect,” Mr. Roberts says. “Because if I didn’t find an effect I tossed the paper in the garbage.”
Yep. This is a big problem, and probably a lot bigger than in the past, thanks to technology. Most of science, not just econ, is grappling with this problem. It's not just social science, either - bio is having similar issues.
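Roberts' 1-in-20 arithmetic is easy to check with a quick simulation. The sketch below just regresses pure noise on pure noise 20 times and asks how often at least one "effect" clears p < 0.05 - the numbers are illustrative, not from any real study:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n_sims, n_regressions, n_obs = 2000, 20, 100

runs_with_false_positive = 0
for _ in range(n_sims):
    y = rng.normal(size=n_obs)                 # outcome: pure noise
    for _ in range(n_regressions):
        x = rng.normal(size=n_obs)             # regressor: unrelated noise
        if stats.linregress(x, y).pvalue < 0.05:
            runs_with_false_positive += 1
            break

print(runs_with_false_positive / n_sims)       # roughly 1 - 0.95**20, about 0.64
```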


6. Russ calls for more humility on the part of economists:
Roberts is saying that economists ought to be humble about what they know—and forthright about what they don’t...When the White House calls to ask how many jobs its agenda will create, what should the humble economist say? “One answer,” Mr. Roberts suggests, “is to say, ‘Well we can’t answer those questions. But here are some things we think could happen, and here’s our best guess of what the likelihood is.” That wouldn’t lend itself to partisan point-scoring. The advantage is it might be honest.
I agree completely. People are really good at understanding point estimates, but bad at understanding confidence intervals, and really bad at understanding confidence intervals that arise from model uncertainty. "Humility" is just a way of saying that economists should express more uncertainty in public pronouncements, even if their political ideologies push them toward presenting an attitude of confident certainty. "One-handed economists" are exactly what we have too many of these days. Dang it, Harry Truman!


7. Russ does say one thing I disagree with pretty strongly:
Economists also look for natural experiments—instances when some variable is changed by an external event. A famous example is the 1990 study concluding that the influx of Cubans from the Mariel boatlift didn’t hurt prospects for Miami’s native workers. Yet researchers still must make subjective choices, such as which cities to use as a control group. 
Harvard’s George Borjas re-examined the Mariel data last year and insisted that the original findings were wrong. Then Giovanni Peri and Vasil Yasenov of the University of California, Davis retorted that Mr. Borjas’s rebuttal was flawed. The war of attrition continues. To Mr. Roberts, this indicates something deeper than detached analysis at work. “There’s no way George Borjas or Peri are going to do a study and find the opposite of what they found over the last 10 years,” he says. “It’s just not going to happen. Doesn’t happen. That’s not a knock on them.”
It might be fun and eyeball-grabbing to report that "opinions on the shape of the Earth differ," but that doesn't mean it's a good thing. Yes, it's always possible to find That One Guy who loudly and consistently disagrees with the empirical consensus. That doesn't mean there's no consensus. In the case of immigration, That One Guy is Borjas, but just because he's outspoken and consistent doesn't mean that we need to give his opinion or his papers anywhere close to the same weight we give to the many researchers and studies that find the opposite.


Anyway, it's a great interview write-up, and I'd like to see the full transcript. Overall, I'm in agreement with Russ, but I'll continue to try to convince him of the power of empirical research!

Friday, May 13, 2016

Review: Ben Bernanke's "The Courage to Act"


I wrote a review of Ben Bernanke's book, The Courage to Act, for the Council on Foreign Relations. Here's an excerpt:
Basically, Bernanke wants the world to understand why he did what he did, and in order to understand we have to know everything.  
And the book succeeds. Those who are willing to wade through 600 pages of history, and who know something about the economic theories and the political actors involved, will come away from this book thinking that Ben Bernanke is a good guy who did a good job in a tight spot. 
But along the way, the book reveals a lot more than that. The most interesting lessons of The Courage to Act are not about Bernanke himself, but about the system in which he operated. The key revelation is that the way that the U.S. deals with macroeconomic challenges, and with monetary policy, is fundamentally flawed. In both academia and in politics, old ideas and prejudices are firmly entrenched, and not even the disasters of crisis and depression were enough to dislodge them.
The main points I make in the review are:

1. Bernanke was the right person in the right place at the right time. He was almost providentially well-suited to the task of steering America through both the financial crisis and the Great Recession that followed. A lot of that had to do with his unwillingness to downplay the significance of the Great Depression (as Robert Lucas and others did), and with his unwillingness to ignore the financial sector (as other New Keynesians did).

2. However, the institutional, cultural, and intellectual barriers against easy monetary policy that were created in the 1980s, as a reaction to the inflation of the 70s, held firm, preventing Bernanke and the Fed from taking more dramatic steps to boost employment, and preventing a thorough rethink of conventional macroeconomic wisdom.

3. Fiscal Keynesianism, however, has also survived, despite generations of efforts by monetarists, New Classicals, Austrians, and others to kill it off. Deep down, Americans still believe that stimulus works.

4. The political radicalism of the Republican party was a big impediment to Bernanke's efforts to revive the economy. Anti-Fed populism, from both the right (Ron Paul) and the left (Bernie Sanders), also interfered with the goal of putting Americans back to work.


You can read the whole thing here!

Michael Strain and James Kwak debate Econ 101


Very interesting debate over Econ 101, between Michael Strain and James Kwak. Strain attempts to defend Econ 101 from the likes of Paul Krugman and Yours Truly. He especially criticizes my call for more empirics in 101:
Critics suggest that introductory textbooks should emphasize empirical studies over these models. There are many problems with this suggestion, not the least of which that economists’ empirical studies don’t agree on many important policy issues. For example, it is ridiculous to suggest that economists have reached consensus that raising the minimum wage won’t reduce employment. Some studies find non-trivial employment losses; others don’t. The debates often hinge on one’s preferred statistical methods. And deciding which methods you prefer is way beyond the scope of an introductory course. 
As you might predict, I have some problems with this. First of all, I don't like the idea that if the empirics aren't conclusively settled, we should just teach theories and forget about the facts. I agree with Kwak, who writes:
I don’t understand this argument. The minimum wage may or may not increase unemployment, depending on a host of other factors. The fact that economists don’t agree reflects the messiness of the world. That’s a feature, not a bug.
Totally! Acknowledging that messiness seems like the intellectually honest thing to do. It seems bad to give kids too strong a false sense of certainty about the way the world works. When a debate is unresolved, I think you shouldn't simply ignore the evidence in favor of a theory that supports one side of the debate.

As a side note, I think the evidence on short-term employment effects of minimum wage is more conclusive than Strain believes, though also more nuanced than is often reported in the media and in casual discussions.

Strain also writes this, which I disagree with even more:
Even more problematic, some of the empirical research most celebrated by critics of economics 101 contradicts itself about the basic structure of the labor market. The famous “Mariel boatlift paper” finds that a large increase in immigrant workers doesn’t lower the wages of native workers. The famous “New Jersey-Pennsylvania minimum wage paper” finds that an increase in the minimum wage doesn’t reduce employment. If labor supply increases and wages stay constant — the Mariel paper — then the labor demand curve must be flat. But if the minimum wage increases and employment stays constant — New Jersey-Pennsylvania — then the labor demand curve must be vertical. Reconciling these studies is, again, way beyond the scope of an intro course. (emphasis mine)
Strain is using the simplest, most basic Econ 101 theory - a single S-D graph applying to all labor markets - to try to understand multiple results at once. He finds that this super-simple theory can't simultaneously explain two different empirical stylized facts, and concludes that we should respond by not teaching intro students about one or both of those facts.

But what if super-simple theory is just not powerful enough to describe both these situations at once? What if there isn't a single labor demand curve that applies to every labor market? Maybe in the case of minimum wage, monopsony models are better than good old supply-and-demand. Maybe in the case of immigration, general equilibrium effects are important. Maybe search frictions are a big deal. There are lots of other possibilities too.
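To illustrate just one of those possibilities, here's a bare-bones monopsony example with made-up parameter values (a linear labor supply curve and a constant marginal revenue product), showing how a moderate minimum wage can raise employment rather than reduce it:

```python
# Bare-bones monopsony sketch with made-up numbers. The firm faces an
# upward-sloping labor supply curve w(L) = a + b*L and a constant marginal
# revenue product of labor m.
a, b, m = 5.0, 0.5, 15.0

# Without a minimum wage, the monopsonist sets the marginal cost of labor
# (a + 2*b*L) equal to m and pays the wage needed to attract that many workers.
L_no_floor = (m - a) / (2 * b)
w_no_floor = a + b * L_no_floor

# With a binding minimum wage between w_no_floor and m, the firm hires everyone
# willing to work at that wage, since each worker still adds m > w_min in revenue.
w_min = 11.0
L_with_floor = (w_min - a) / b

print(f"no floor:   L = {L_no_floor:.0f}, w = ${w_no_floor:.2f}")
print(f"with floor: L = {L_with_floor:.0f}, w = ${w_min:.2f}")
# Here the wage floor raises both wages and employment -- the opposite of what a
# single competitive labor demand curve would predict.
```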

Strain's implicit assumption - that there's just one labor demand curve - seems like an example of what I call "101ism". A good 101 class, in my opinion, should teach monopoly models, and at least give a brief mention of general equilibrium and search frictions. And even more importantly, a good 101 class should stress that models are situational tools, not Theories of Everything. Assuming that there's one single labor demand curve that applies to all labor markets is a way of taking a simple model and trying to make it function as a Theory of Everything; no one should be surprised when that attempt fails. And our response to that failure shouldn't be to just not teach the empirics. It should be to rethink the way we use the theory.

Anyway, I agree with what Kwak says here:
People like Krugman and Smith (and me) aren’t saying that Economics 101 is useless; we all think that it teaches some incredibly useful analytical tools. The problem is that many people believe (or act as if they believe) that those models are a complete description of reality from which you can draw policy conclusions [without looking at evidence].
Exactly.

Sunday, May 08, 2016

Regulation and growth


As long as we're on the topic of regulation and growth, check out this post I recently wrote for Bloomberg View:
I’m very sympathetic to the idea that regulation holds back growth. It’s easy to look around and find examples of regulations that protect incumbent businesses at the expense of the consumer -- for example, the laws that forbid car companies from selling directly to consumers, creating a vast industry of middlemen. You can also find clear examples of careless bureaucratic overreach and inertia, like the total ban on sonic booms over the U.S. and its territorial water (as opposed to noise limits). These inefficient constraints on perfectly healthy economic activity must reduce the size of our economy by some amount, acting like sand in the gears of productive activity. 
The question is how much...If regulation is less harmful than the free-marketers would have us believe, we risk concentrating our attention and effort on a red herring... 
[F]ocusing too much on deregulation might actually hurt our economy. Many government rules, such as prohibitions on pollution, tainted meat, false advertising or abusive labor practices, are things that the public would probably like to keep in place. And reckless deregulation, like the loosening of restrictions on the financial industry in the period before the 2008 credit crisis, can hurt economic growth in ways not captured by most economic models. Although burdensome regulation is certainly a worry, a sensible approach would be to proceed cautiously, focusing on the most obviously useless and harmful regulations first (this is the approach championed by my Bloomberg View colleague Cass Sunstein). We don’t necessarily want to use a flamethrower just to cut a bit of red tape.

Also, on Twitter I wrote a "tweetstorm" (series of threaded tweets) about the regulation debate. Here are the tweets:

Regulation is really a multifaceted, complex, and important set of different issues. It's an important area of policy debate, but it can't be boiled down to one simple graph - and shouldn't be boiled down to one simple slogan.

Friday, May 06, 2016

Brad DeLong pulpifies a Cochrane graph


When Bob Lucas, Tom Sargent, and Ed Prescott remade macroeconomics in the 70s and 80s, what they were rebelling against was reduced-form macro. So you think you have a "law" about how GDP affects consumption? You had better be able to justify that with an optimization problem, said Lucas et al. Otherwise, your "law" is liable to break down the minute you try to take advantage of it with government policy.

Lots of people are unhappy with what Lucas et al. invented to replace the "old macro". But few would argue that the old reduced-form approach didn't need replacing. Identifying correlations in aggregate data really doesn't tell you a lot about what you can accomplish with policy.

Because of this, I've always been highly skeptical of John Cochrane's claim that if we simply launched a massive deregulatory effort, it would make us many times richer than we are today. Cochrane typically shows a graph of the World Bank's "ease of doing business" rankings vs. GDP, and claims that this graph essentially represents a menu of policy options - that if we boost our World Bank ranking slightly past the (totally hypothetical) "frontier", we can make our country five times as rich as it currently is. This always seemed like the exact same fallacy that Lucas et al. pointed out with respect to the Phillips Curve. 

You can't just do a simple curve-fitting exercise and use it to make vast, sweeping changes to national policy. 

Brad DeLong, however, has done me one better. In a short yet magisterial blog post, DeLong shows that even if Cochrane is right that countries can move freely around the World Bank ranking graph, the policy conclusions are incredibly sensitive to the choice of functional form. 

Here is Cochrane's graph, unpacked from its log form so you can see how speculative it really is:


DeLong notes that this looks more than a little bit crazy, and decides to do his own curve-fitting exercise (which for some reason he buries at the bottom of his post). Instead of a linear model for log GDP, he fits a quadratic polynomial, a cubic polynomial, and a quartic polynomial. Here's what he gets:


Cochrane's conclusion disappears entirely! As soon as you add even a little curvature to the function, the data tell us that the U.S. is actually at or very near the optimal policy frontier. DeLong also posts his R code in case you want to play with it yourself. This is a dramatic pulpification of a type rarely seen these days. (And Greg Mankiw gets caught in the blast wave.)
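If you want to see the functional-form point for yourself without downloading DeLong's R code, here's a toy version in Python with made-up data. The coefficients and the "frontier" score are arbitrary; the point is only how much the extrapolated answer swings with the assumed curve:

```python
import numpy as np

# Made-up (ease-of-doing-business score, log GDP per capita) data.
rng = np.random.default_rng(7)
score = rng.uniform(30, 90, 150)
log_gdp = 6 + 0.05 * score - 0.0003 * score**2 + rng.normal(0, 0.4, 150)

frontier = 100.0   # the hypothetical "frontier" score to extrapolate to
for degree in (1, 2, 3, 4):
    coefs = np.polyfit(score, log_gdp, degree)
    gdp_at_frontier = np.exp(np.polyval(coefs, frontier))
    print(f"degree-{degree} fit: GDP per capita at the frontier = ${gdp_at_frontier:,.0f}")
# The linear fit extrapolates smoothly upward; once you allow curvature, the
# predicted payoff from pushing past the observed range can shrink or vanish.
```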

DeLong shows that even if Cochrane is right that we can use his curve like macroeconomists thought we could use the Phillips Curve back in 1970, he's almost certainly using the wrong curve. You'd think Cochrane would care about this possibility enough to at least play around with slightly different functional forms before declaring in the Wall Street Journal that we can boost our per capita income to $400,000 per person by launching an all-out attack on the regulatory state. I mean, how much effort does it take? Not much.

And this is an important issue. An all-out attack on the regulatory state would inevitably destroy many regulations that have a net social benefit. The cost would be high. Economists shouldn't bend over backwards to try to show that the benefits would be even higher. That's just not good policy advice.

(Also, on a semi-related note, Cochrane's WSJ op-ed (paywalled) uses China's nominal growth as a measure of the rise in China's standard of living. That's just not right - he should have used real growth. If that's just an oversight, it should be corrected.)


Updates

Cochrane responds to DeLong. His basic responses are 1) drawing plots with log GDP is perfectly fine, and 2) communist regimes like North Korea prove that the relationship between regulation and growth is causal.

Point 1 is right. Log GDP on the y-axis might mislead 3 or 4 people out there, but those are people who have probably been so very misled by so very many things that this isn't going to make a difference.

Point 2 is not really right. Sure, if you go around shooting businesspeople with submachine guns, you can tank GDP by making it really hard to do business. No one doubts that. But that's a far, far, far cry from being able to boost GDP to $400k per person by slashing regulation and taxes. Cochrane's problem isn't just causality, it's out-of-sample extrapolation. DeLong shows that if you fit a cubic or quartic polynomial to the World Bank data, you find that too much "ease of doing business" is actually bad for your economy, and doing what Cochrane suggests would reduce our GDP substantially. Is that really true? Who knows. Really, what this exercise shows is that curve-fitting-and-extrapolation exercises like the one Cochrane does in the WSJ are silly sauce.

Anyway, if you're interested to read more stuff I wrote about regulation and growth, see this post.

Monday, April 25, 2016

Life Update: Leaving Stony Brook, joining Bloomberg View


Short version: I've joined Bloomberg View as a full-time writer. I'm leaving Stony Brook, and leaving academia, effective August 15, 2016. Bloomberg View had approached me a year ago about possibly working for them full-time. So I took a 1-year leave from Stony Brook, partially to finish some research stuff I had to do, and partially to decide whether I should switch jobs (I retained my Stony Brook affiliation during that time, and kept advising students and working with Stony Brook professors, but didn't teach classes). Early this year, Bloomberg made me a very nice offer for a full-time position, and I decided to take it. The offer included the chance to live in the San Francisco Bay Area, where I've long wanted to live, so I've moved to SF.

Longer version: Back in 2006, the original reason I thought of getting an econ PhD was actually to become an econ pundit and writer. I saw the quality of the econ commentary out there, and decided that it could be much improved - that there was a huge breakdown in the pipeline of good and useful ideas between academia and the public debate. I admired economists like Brad DeLong and Paul Krugman, and writers like Matt Yglesias, who took some steps to bridge that gap, but I thought that much more needed to be done. I wanted to make sure that good ideas, rather than politically motivated propaganda or silly oversimplifications, made it out of the ivory tower and into the public consciousness.

As soon as I started grad school, I essentially forgot that dream entirely. I got absorbed in the grad school stuff, first in macroeconomics, then later in my dissertation and behavioral finance (which was much more fun and satisfying than macro). And I enjoyed the academic life at Stony Brook, especially the people there. Now, with this Bloomberg job, I'll sadly be leaving that behind - but in the end, I came right back around to where I started. In terms of bringing good ideas from academia into the public consciousness, progress has been made, but much remains to be done. Fortunately, Bloomberg is a great platform to do this, and I'll be working with some great econ writers like Narayana Kocherlakota, Justin Fox, and many others.

As for Stony Brook, I will miss everyone there. It's a good, fast-growing department. The behavioral finance group there is strong and growing, with Danling Jiang, Stefan Zeisberger, and others. The people in charge of the College of Business, including the dean, Manuel London, are really excellent leaders, and the department is much friendlier and less politics-ridden than basically any other I've seen. They'll be hiring my replacement soon, so if you're a job candidate in behavioral finance, and you'd like to live in New York, I'd recommend Stony Brook.

Anyway, to you grad students out there: don't regard me as a role model - my career is weird and unusual, and was probably always destined to be that way. I'd still recommend the economics PhD, and the life of an economist, to a whole lot of people out there.

That's all that's changed. I'll still be blogging here, and I'll still be around on Twitter!

Sunday, April 24, 2016

Baseline models


Your macro diss of the day comes via Kevin Grier. Grier is responding to a blog post where David Andolfatto uses a simple macro model to think about interest rates and aggregate demand. Kevin, employing a somewhat tongue-in-cheek tone, criticized David's choice of model:
OK, everybody got that. Representative agent? check. Perfect capital markets? check, lifetime income fixed and known with certainty? check. Time-separable preferences? check. 
AAARRRRGGGGHHHHH!! 
People, it would be one thing if models like this fit the data, but they don't. 
The consumption CAPM is not an accurate predictor of asset prices. The degree of risk aversion required to make the numbers work in the equity premium puzzle is something on the order of 25 or above. The literature is littered with papers rejecting [the Permanent Income Hypothesis]. 
So we are being harangued by a model that is unrealistic in the theory and inaccurate to the extreme in its predictions. 
And that's pretty much modern macro in a freakin' nutshell.
Mamba out. 
Kevin is saying that if simple models of this type - models with representative agents, perfect capital markets, deterministic income, and time-separable preferences - haven't performed well empirically, we shouldn't use them to think about macro questions, even in a casual setting like a blog post.
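To see why that risk-aversion number comes out so high, here's the back-of-the-envelope version of the equity premium arithmetic, using round ballpark moments that I've plugged in for illustration - they're assumptions, not estimates from this post:

```python
# In the standard consumption CAPM with CRRA utility, the equity premium is
# roughly gamma * cov(consumption growth, equity returns). The moments below
# are round ballpark figures chosen for illustration.
equity_premium = 0.06   # ~6 percentage points per year
sigma_c = 0.02          # std. dev. of annual consumption growth
sigma_r = 0.17          # std. dev. of annual equity returns
corr = 0.4              # correlation of consumption growth with returns

implied_gamma = equity_premium / (corr * sigma_c * sigma_r)
print(round(implied_gamma))   # a coefficient of relative risk aversion around 44
```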

I think Kevin is basically right about the GIGO (garbage in, garbage out) part. Bad models lead to bad thinking.

A defender of David's approach might say that this model is just a first-pass approximation, good for a first-pass analysis. That even if simple models like this can't solve the Equity Premium Puzzle or predict all of people's consumption behavior, they're good enough for thinking about monetary policy in a casual way.

But I don't think I'd buy that argument. We know that heterogeneity can change the results of monetary policy models a lot. We know incomplete markets can also change things a lot, in different ways. And I think it's pretty well-established that stochasticity and aggregate risk can change the answers to monetary policy questions a lot.

So by using a representative-agent, perfect-foresight, complete-markets model, David is ignoring a bunch of things that we know can totally change the answers to the exact policy questions David is thinking about.

So what should we do instead? One problem is that models with things like heterogeneity, stochasticity, and imperfect markets are a lot more complicated, and therefore harder to apply in a quick or casual way. If we insist on using models with those elements, then it's going to be very hard to write blog posts thinking through monetary policy issues in a formal way. Maybe that's just the sad truth.

Another problem is that we don't really know that models with things like heterogeneity, stochasticity, and imperfect markets are going to be much better. Most of these models can match a couple features of the data, but as of right now there's no macro model in existence that matches most or all of the stylized facts about business cycles, finance, consumption, etc. 

So it might be a choice between using A) a simple model that we know doesn't work very well, and B) a complicated model that we know doesn't work very well. Again, the best choice might be just to throw up our hands and stop using formal models to think casually about monetary policy.

Kevin also says that "being harangued by a model that is unrealistic in the theory and inaccurate to the extreme in its predictions" is "pretty much modern macro in a freakin' nutshell." Is that true? 

Actually, I'd say it's more of a problem in fields like international finance, asset pricing, and labor that try to incorporate macro models into their own papers. Usually, in my experience, they pick a basic RBC-type model, because it's easy to use. They then add their own elements, like labor search, financial assets, or multiple countries. But since the basic foundation is a macro model that doesn't even work well for the purpose it was originally conceived for (explaining the business cycle), the whole enterprise is probably doomed from the start.

In the core macro field, though, I think there's a recognition that simple models don't work, and an active search for better ones. From what I've personally seen, most leading macroeconomists are also pretty cautious and circumspect when they give advice to policymakers directly, and don't rely too strongly on any one model.