I've already written below about the soul-searching that many macroeconomists have devoted themselves to in recent years, after recognizing the limited fertility of DSGEs as a research program. But there are less hermetic debates running in parallel. Like this one, about the interaction between theory and empirical research: How should theory and evidence relate to each other? (Noah Smith)
Without a structural model, empirical results are only locally valid. And you don’t really know how local “local” is. If you find that raising the minimum wage from $10 to $12 doesn’t reduce employment much in Seattle, what does that really tell you about what would happen if you raised it from $10 to $15 in Baltimore?
That’s a good reason to want a good structural model. With a good structural model, you can predict the effects of policies far away from the current state of the world.
In lots of sciences, it seems like that’s exactly how structural models get used. If you want to predict how the climate will respond to an increase in CO2, you use a structural, microfounded climate model based on physics, not a simple linear model based on some quasi-experiment like a volcanic eruption. If you want to predict how fish populations will respond to an increase in pollutants, you use a structural, microfounded model based on ecology, biology, and chemistry, not a simple linear model based on some quasi-experiment like a past pollution episode.
That doesn’t mean you don’t do the quasi-experimental studies, of course. You do them in order to check to make sure your structural models are good. If the structural climate model gets a volcanic eruption wrong, you know you have to go back and reexamine the model. If the structural ecological model gets a pollution episode wrong, you know you have to rethink the model’s assumptions. And so on.
(…)
Economics could, in principle, do the exact same thing. Suppose you want to predict the effects of labor policies like minimum wages, liberalization of migration, overtime rules, etc. You could make structural models, with things like search, general equilibrium, on-the-job learning, job ladders, consumption-leisure complementarities, wage bargaining, or whatever you like. Then you could check to make sure that the models agreed with the results of quasi-experimental studies – in other words, that they correctly predicted the results of minimum wage hikes, new overtime rules, or surges of immigration. Those structural models that got the natural experiments wrong would be considered unfit for use, while those that got them right would stay on the list of usable models. As time goes on, more and more natural experiments will shrink the set of usable models, while methodological innovations enlarge it.
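The check-and-prune loop described above can be sketched in code. This is purely an illustration of the logic, not any actual economic model: the model names, functional forms, tolerance, and numbers below are all made up.

```python
# Sketch of "use quasi-experiments to shrink the set of usable models".
# All names and numbers here are hypothetical, for illustration only.

def filter_models(models, experiments, tolerance=0.5):
    """Keep only models whose predictions fall within `tolerance`
    of every observed quasi-experimental outcome."""
    usable = []
    for name, predict in models:
        if all(abs(predict(policy) - observed) <= tolerance
               for policy, observed in experiments):
            usable.append(name)
    return usable

# Two toy "structural models" predicting the employment effect (in %)
# of a minimum-wage increase (in $):
models = [
    ("search_model", lambda dw: -0.2 * dw),       # mild disemployment
    ("competitive_model", lambda dw: -2.0 * dw),  # strong disemployment
]

# Stylized quasi-experiments: (wage increase, observed employment effect)
experiments = [(2.0, -0.3), (1.0, -0.1)]

print(filter_models(models, experiments))  # → ['search_model']
```

Each new natural experiment adds an entry to `experiments`, which can only shrink the surviving set; each new modeling idea adds an entry to `models`, which can only enlarge it – exactly the dynamic the paragraph above describes.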
But in practice, I think what often happens in econ is more like the following:
1. Some papers make structural models, observe that these models can fit (or sort-of fit) a couple of stylized facts, and call it a day. Economists who like these theories (based on intuition, plausibility, or the fact that their dissertation adviser made the model) then use them for policy predictions forever after, without ever checking them rigorously against empirical evidence.
2. Other papers do purely empirical work, using simple linear models. Economists then use these linear models to make policy predictions (“Minimum wages don’t have significant disemployment effects”).
3. A third group of papers does empirical work, observes the results, and then makes one structural model per paper to “explain” the empirical result just found. These models are generally never used or seen again.
A lot of young, smart economists trying to make it in the academic world these days seem to write papers that fall into Group 3. This seems true in macro, at least, as Ricardo Reis shows in a recent essay. Reis worries that many of the theory sections that young smart economists are tacking on to the end of fundamentally empirical papers are actually pointless:
(…)
It’s easy to see this pro-forma model-making as a sort of conformity signaling – young, empirically-minded economists going the extra mile to prove that they don’t think the work of the older “theory generation” (who are now their advisers, reviewers, editors and senior colleagues) was for naught.
But what is the result of all this pro-forma model-making? To some degree it’s just a waste of time and effort, generating models that will never actually be used for anything. It might also contribute to the “chameleon” problem, by giving policy advisers an effectively infinite set of models to pick and choose from.
And most worryingly, it might block smart young empirically-minded economists from using structural models the way other scientists do – i.e., from trying to make models with consistently good out-of-sample predictive power. If model-making becomes a pro-forma exercise you do at the end of your empirical paper, models eventually become a joke. Ironically, old folks’ insistence on constant use of theory could end up devaluing it.
(…)
In other words, econ seems too focused on “theory vs. evidence” instead of using the two in conjunction. And when they do get used in conjunction, it’s often in a tacked-on, pro-forma sort of way, without a real meaningful interplay between the two. Of course, this is just my own limited experience, and there are whole fields – industrial organization, environmental economics, trade – that I have relatively limited contact with. So I could be over-generalizing. Nevertheless, I see very few economists explicitly calling for the kind of “combined approach” to modeling that exists in other sciences – i.e., using evidence to continuously restrict the set of usable models.