Answer PLSC COMP 2

'''Discuss advantages and disadvantages of the EITM approach. Describe a particular theory in CP and discuss how using this approach affects generalization and validation.'''

History and Definition of EITM
The Empirical Implications of Theoretical Models (EITM) program was and continues to be an attempt to bridge the professional and scholarly gap between social scientists working in empirical and formal arenas. The former generally but not exclusively refers to social scientists who inductively make extensive use of applied statistical methods to test hypotheses; the latter use deductive mathematical models and derive insights from equilibria solutions, comparative statics, and the like.

The difference in approach split the profession. Scholars tended to work (and to train graduate students) almost exclusively in one approach, giving short shrift to the other. The result was polarizing. Too often scholars in each camp traded barbs, with empiricists decrying the purely theoretical nature of formal models and formal modelers disparaging the atheoretic work of empiricists (especially quantitative empiricists). Consider, for example, Green and Shapiro's (1994) criticism that too few rational choice studies have been subjected to "serious empirical scrutiny and survived."

Morton (1999) describes "political science's dilemma" as the confluence of "strong methods without theory" and "strong theory without data." The roots of the dilemma and of the divide, according to Morton, lie in the behavioral revolution: the explosion of data and of ever more complex statistical analyses of those data. The data-rich environment offered the promise of conducting political science research in a manner that emphasized empirically testable theories; Morton argues, however, that the theoretical side of that approach was supplanted. Clarke and Primo (2007) agree that what gained prominence was model testing rather than theory testing.

The EITM movement, with significant encouragement and funding from the National Science Foundation, attempted to unite the two approaches within one cohesive and complete research program. The goal was to join the rigid, rigorous, and explanation-rich formal models with observable, testable, and inductively analyzable empiricism.

Elster (2007) drives home the point regarding explanation, and why statistical analyses are "second-best." Applied statistical analyses, on their own, provide evidence of correlation. The mechanism, the "black box" explanation for why we obtain the results we observe, remains unknown unless a theoretical explanation is provided. To articulate theory more carefully, formal specifications are used, assumptions are made explicit, and .....

Assumptions of EITM (Is it all Rational Choice?)
Although formal models are often equated with Rational Choice models, there is ample room for diversity of formal approaches. Morton (1999) provides a typology of models spanning expected utility theory, standard game theory, behavioral game theory, bounded rationality, prospect theory, and the like. This diversity allows scholars to choose a model that serves the question or purpose driving the research (Clarke and Primo 2007).

General Applications in the CP (and American) literature
To demonstrate the problems with non-empirically tested theoretical models, consider Downs (1957)(An Economic Theory of Democracy): the rational voter should never vote. Downs Summary. Empiricism says this is not an accurate model of voting behavior. While Downs' failed prediction has led to much more investigation into voting behavior (including Riker and Ordeshook's 1968 article that introduced the D term), the purely theoretical approach would be, without more, of very limited use. It is the infusion of empiricism into the theoretical development that pays dividends.
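The paradox can be made concrete with the Riker-Ordeshook calculus of voting, under which a citizen votes when R = pB - C + D > 0. A minimal numerical sketch (the parameter values are purely illustrative):

```python
def turnout_payoff(p, B, C, D=0.0):
    """Riker-Ordeshook calculus of voting: vote iff R = p*B - C + D > 0.

    p: probability the voter is pivotal (tiny in a large electorate)
    B: benefit from one's preferred candidate winning
    C: cost of voting
    D: civic-duty / expressive benefit (the 'D term')
    """
    return p * B - C + D

# Downs' paradox: with p minuscule, p*B - C is negative, so abstain.
R_downs = turnout_payoff(p=1e-7, B=1000.0, C=1.0)
# Adding the D term can make turnout rational again.
R_riker = turnout_payoff(p=1e-7, B=1000.0, C=1.0, D=2.0)

print(R_downs < 0)  # rational abstention without the D term
print(R_riker > 0)  # voting rational once duty is included
```

The empirical failure of the D-free model is visible directly: no plausible p rescues turnout without the duty term, which is exactly the kind of assumption-level scrutiny EITM calls for.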

Aldrich (1995)(Why Parties) is a rational choice based explanation of the emergence and development of parties in an American context. He argues that parties are endogenous institutions that are used, shaped, and remade to fit the needs of office seeking public officials. Why Parties Summary

Cox and McCubbins (1993)(Legislative Leviathan) offer an answer to the question of why House institutions are organized as they are. Their answer is the foundation of Cartel Theory: parties can be thought of as legislative cartels used to solve collective action problems. The micro-foundations of their argument lie in Rational Choice because they claim that any theory of parties must start with office seekers motivated by the election/reelection goal. Legislative Leviathan Summary

Wintrobe (1998)(Political Economy of Dictatorship) tends more to the formal and less to the empirical analysis of dictatorship. Despite the lack of applied statistics, however, he does derive a four-part typology of dictators that precipitates from a rational choice approach to repression and the extortion of resources.

The entire enterprise of Analytic Narratives (Bates et al. 1998) is an EITM approach. Each work in the volume takes a Rational Choice analytic framework and tries to gain empirical leverage (to spite Green and Shapiro, it seems) by using the narrative form of history.

Advantages
The concurrent application of empirical and formal approaches to political questions promises a more complete research program that suffers neither from a lack of skilled empirical investigation nor from a lack of causal mechanism specification.


 * The consequences of choices made jointly by several people may not be obvious; modeling them helps, especially through formal modeling (Wagner 2007) -- the prisoner's dilemma, for instance.
 * Helps us understand problems of uncertainty and incomplete information.
 * Counter-intuitive results and point predictions that fail in real life (group formation, voting, Tullock's dilemma) can spur newer research and more questions.
 * Can model unobservables, such as strategic interaction, that non-formal models cannot easily capture.
 * Well suited to modeling interdependent decisions.
 * Assumptions can be evaluated because they are explicit.
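The prisoner's dilemma in the first bullet illustrates the point: a formal model makes it transparent that individually dominant choices produce a collectively worse outcome. A minimal sketch with conventional (illustrative) payoffs:

```python
# Prisoner's dilemma, row player's payoffs, symmetric game.
# Conventional ordering: T (5) > R (3) > P (1) > S (0).
PAYOFF = {
    ("C", "C"): 3,  # R: mutual cooperation
    ("C", "D"): 0,  # S: sucker's payoff
    ("D", "C"): 5,  # T: temptation to defect
    ("D", "D"): 1,  # P: mutual punishment
}

def best_response(opponent_action):
    """Row player's best reply to a fixed opponent action."""
    return max(["C", "D"], key=lambda a: PAYOFF[(a, opponent_action)])

# Defection is dominant: it is the best reply to either opponent action...
assert best_response("C") == "D" and best_response("D") == "D"
# ...yet mutual defection (1, 1) is Pareto-dominated by cooperation (3, 3).
print(PAYOFF[("D", "D")], PAYOFF[("C", "C")])
```

The collective-action logic that runs through the list above (and through Olson-style puzzles) is this same gap between the dominant-strategy equilibrium and the jointly preferred outcome.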

Disadvantages
Though the EITM approach may be a noble goal, its demands are substantial and consequential. Because the EITM agenda is essentially a matter of methodology, a central disadvantage arises from training and skill acquisition. A 2002 NSF report recognizes that the technical-analytic proficiency required to unite the two approaches is great.

The question that ultimately arises, and that no department has sufficiently answered, is how to structure a doctoral program that produces scholars well equipped to conduct EITM work. Funding, program length, undergraduate technical training, and instruction in what graduate-level political science requires are all obstacles that must be addressed if the EITM approach is adopted discipline-wide.

Beyond these sweeping disadvantages to an EITM approach within the discipline, studies seeking to use EITM methods face an uphill battle. As Green and Shapiro note, "rational choice hypotheses are too often formulated in ways that are inherently resistant to genuine empirical testing." If by assumption we abstract away too much of reality, then there will be little to nothing that has empirical implications. Because rational choice foundations can at times make assumptions that border on the heroic, Green and Shapiro's criticism must be taken into account. This does not damn the EITM agenda, but it does call for very careful construction of theory with an eye towards an eventual empirical test.

de Marchi (2005) suggests that an appropriate way to evaluate the empirical implications of a theoretical model is to ask the following questions:

1) What are the assumptions and/or parameters of the model? Do the assumptions spring from a consideration of the problem itself, or are they unrelated to the main logic of the model, chosen arbitrarily, perhaps solely to make derivations possible? Are the values chosen for parameters derived from qualitative or quantitative empirical research, or are they chosen arbitrarily, perhaps for convenience? How many of the parameters are open, to be "filled in" by data analysis? This is similar to Clarke and Primo's (2004) and Morton's (1996) stress on testing the assumptions and parameters of models rather than testing their predictions. While testing predictions will not give us any information about the model itself (Clarke and Primo 2004), it can tell us what is not feasible in real life and whether our point predictions, if any, are feasible.

2) How immune is the model to small perturbations of the parameters?

3) Is the model a toy model, such as the prisoner's dilemma? While "toy" models are helpful for building intuition about a game, they are difficult to falsify.

4) Are the results of the model verified by out-of-sample tests? Are there alternatives to a large-N statistical approach that test the model directly (Analytic Narratives, perhaps)?
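De Marchi's second question, robustness to small perturbations, can even be checked mechanically: nudge each parameter up and down and see whether the model's qualitative prediction flips. A toy sketch, using a generic prediction function and illustrative parameter values (the `vote` model below is the Riker-Ordeshook-style turnout rule, included only as an example):

```python
import itertools

def robust_prediction(predict, params, eps=0.05):
    """Return True if predict(**params) is unchanged when every parameter
    is perturbed up or down by a fraction eps (a crude robustness check)."""
    baseline = predict(**params)
    names = list(params)
    for signs in itertools.product((1 - eps, 1 + eps), repeat=len(names)):
        perturbed = {n: params[n] * s for n, s in zip(names, signs)}
        if predict(**perturbed) != baseline:
            return False
    return True

def vote(p, B, C, D):
    """Illustrative model: turn out iff R = p*B - C + D > 0."""
    return p * B - C + D > 0

# A comfortable margin survives 5% perturbations; a knife-edge case does not.
print(robust_prediction(vote, {"p": 1e-7, "B": 1000.0, "C": 1.0, "D": 2.0}))
print(robust_prediction(vote, {"p": 1e-7, "B": 1000.0, "C": 1.0, "D": 1.0}))
```

A prediction that flips under tiny perturbations is exactly the kind of fragile point prediction de Marchi warns against sending to the data.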


Criticisms
 * Multiple equilibria: one of the biggest critiques of formal models is that, as researchers relax assumptions or add new ones, models yield multiple equilibria, i.e., multiple predictions (since equilibria are the predictions of game-theoretic formal models). Responses include Schelling's focal points and refinements of the equilibrium concept (Bayesian, subgame perfect, sequential). Here is where the advantage of EITM comes in: by empirically testing the hypotheses deduced from the theoretical model, we can see what the outcomes would be. Do the empirical predictions and the game-theoretic predictions align? Which equilibrium aligns with the empirical results?
 * We don't know what to do with disequilibrium or point equilibria. But as Thelen (1999) argues (with respect to rational choice institutionalists versus historical institutionalists), rational choice researchers are interested in why we empirically observe events that are off the equilibrium path. Modeling and then empirically testing our formal models can help us formulate such research questions: why do groups form despite the collective action problem, or why don't parties converge to the center as Downs (1957) would have predicted?
 * We don't always have the empirical data to test predictions. Experiments are useful then; the problem is that they must be set up so that subjects are not forced to make decisions, otherwise they are just simulations (Morton 1996).
 * Results depend on the equilibrium concept used.

Specific theories in CP that might be subject to EITM-style analysis
Alt (reprinted in the 2002 NSF report) notes a trend for questions in comparative politics to deal with macro-level collective behavior rather than individual behavior. This is not always the case (consider social choice a la Arrow). Alt comments that most of the theory in comparative politics is rational choice theory, except among the culture scholars, whose subject is inherently macro-level.

Effects on generalization and validation
Riker (1989) draws a distinction between generalization and explanation. Riker 1989 Summary The key difference is that "explanation requires much more convincing support" than generalizations, which only need to prove their predictive ability. The argument is similar to Elster's (2007) (recall our last day of discussion in Advanced Formal Theory re: Elster on why statistical models are "second-best").

Translating formal models into empirical models
According to an EITM syllabus by BDM, Calvert, and Martin, the source of testable hypotheses within the EITM approach lies in the correspondence between game parameters (payoff parameters and the probabilities of chance moves) and the characteristics of the game outcome. The theoretical implications can then be tested in two ways: (a) derive robust predictions from the parameter-outcome correspondence (in layman's terms, derive the equilibrium) and test them using statistical techniques; or (b) "adopt assumptions about the statistical processes generating the game parameters and use the game theoretic models to specify how these properties can be transformed into observable outcomes." Quantal response models, for instance, start from specific assumptions about the population distribution of true payoffs, noise, or mistaken choices, and derive the resulting distribution of choice outcomes in a game to specify appropriate statistical tests.
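The quantal response idea can be sketched for a 2x2 game: instead of best-responding exactly, each player chooses actions with logit probabilities proportional to exp(λ · expected payoff), and equilibrium is a fixed point of those response functions. The payoff matrices and λ values below are illustrative, and the naive fixed-point iteration is only a sketch of the concept, not a general-purpose solver (it is not guaranteed to converge for arbitrary games):

```python
import math

# Illustrative 2x2 game: A holds row payoffs, B column payoffs.
# These numbers are made up (a prisoner's-dilemma-like structure,
# with action 0 = cooperate, action 1 = defect).
A = [[3.0, 0.0], [5.0, 1.0]]
B = [[3.0, 5.0], [0.0, 1.0]]

def logit(u0, u1, lam):
    """Logit choice probability of action 0 given expected payoffs u0, u1."""
    e0, e1 = math.exp(lam * u0), math.exp(lam * u1)
    return e0 / (e0 + e1)

def logit_qre(A, B, lam, iters=500):
    """Approximate a logit quantal response equilibrium by iteration."""
    p = q = 0.5  # prob. of action 0 for row (p) and column (q)
    for _ in range(iters):
        u_row0 = q * A[0][0] + (1 - q) * A[0][1]
        u_row1 = q * A[1][0] + (1 - q) * A[1][1]
        u_col0 = p * B[0][0] + (1 - p) * B[1][0]
        u_col1 = p * B[0][1] + (1 - p) * B[1][1]
        p, q = logit(u_row0, u_row1, lam), logit(u_col0, u_col1, lam)
    return p, q

# As lam grows (less noise), play approaches the Nash prediction,
# here mutual defection; small lam yields near-random choice.
for lam in (0.1, 1.0, 10.0):
    print(lam, logit_qre(A, B, lam))
```

This is what route (b) buys empirically: λ and the payoff parameters become quantities to estimate from observed choice frequencies, so the formal model itself supplies the statistical likelihood rather than being tested only through a derived point prediction.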