Which Comes First, Theory or Data?

It’s kind of a trick question, exactly the type of gambit that drives both research and blog posts. (The answer, it seems, is “both should magically emerge simultaneously.”)

Anyway…I’ve been in a bit of a funk lately, and not the twerking kind.  Both the seasonal goings-on and my mind doing laps on a vexing problem have left me a bit, ummm, unmotivated to post.  Without further delay, let’s make petroleum jelly out of petroleum…here’s my intellectual/professional quandary/blogging impediment in the form of a blog post.

1. I have a lot of data (BIG data) that I just know is important.  Basically, it is (a big part of) the substance of federal policymaking.

2. I don’t know any theories that really speak to it.

3. Well, I have some, but they are both intractable as presently formulated and I don’t know how best to simplify them to get results.  I have strong hunches about how I could do so, but I’d like to choose the simplifications that are most appropriate for the questions I want to answer.  (In a nutshell, the questions I want to answer are “why are some issues raised and acted upon while others are set aside,” and “how are people and resources deployed across multiple issues at a given point in time?”)

So…what to do?  (If you think about it for a moment, my conundrum is very meta.)

From a “math of math of politics” angle, the real rub is exemplified by the astute question raised by one of my colleagues when I described something I was doing/wanted to do with this big data.

“But what’s the theory?”

I have been told by other colleagues that theory is not necessary—though highly desirable—for empirical social science.  I fundamentally and, if I say so myself, quite correctly disagree with this assertion on theoretical grounds.  (See what I did there?)  But, the more practical rectitude of my assertion that there is no empirical analysis without some kind of theory—in terms of interpreting, publishing, and communicating empirical analysis—is also illustrated by my (empirically focused) colleague’s question.

More “math of math of politics” is raised by the fact that my absolute advantage in terms of scholarly production is in theory (really, modeling), rather than “pure” empirical analysis.  So, maybe I should take a hint and take a leap, “doing the models” that I can do, and letting others sort them out.  After all, I’m tenured, and therefore have the freedom to take the time to do this—the bulk of “the time” in this case (in my expectation, at least) will be navigating what I expect will be a bumpy road to publication and communication of the models.  I foresee plenty of (in-the-weeds) speedbumps in pursuing a “pure theory” approach to the questions I am interested in.

The irony, of course, is that one might think that tenure is at least partly there to motivate me to “take risks” in the sense of really trying to do things right.  In this case, the right path in terms of getting people to listen to the ultimate analysis might involve developing models that, at least now, cannot be motivated by empirical verisimilitude.

So, what to do, I ask you.  Most of you are social scientists, and I am honestly befuddled by the proper way to aggregate/trade-off the two competing intellectual incentives: should I patiently, doggedly, and perhaps inefficiently chase the (as now unknown) “right analysis,” or do the analysis that will be more readily heard and may accordingly grease the wheels for ultimate production and communication of the right analysis?

With that, I leave you with this.

There is no Networking without “two” and “work” or, Incentives & Smelt at APSA!

As Labor Day weekend approaches, scores of scholars are steeling themselves for the “networking experience” that is the annual meeting of the American Political Science Association.  Of course, the main value of networking is establishing relationships.  For example, meeting new people can lead to coauthorships, useful information about grants/jobs/conferences, invitations to give talks, and so forth.

Like it or not, networking is important: to be truly successful in social science (and any academic or creative field), your ideas have to reach and influence others, and the constraints of time and attention lead to a variant on “the squeaky wheel gets the grease” in this, and all, professions.  Networking both exposes others to your ideas and, in the best case, helps you generate (sometimes, but not always, in overt cooperation with others) new ones.

All that said, I wanted to make three quick points about what this aspect of the role of networking implies, from a strategic (but not cynical) standpoint, about how one should network.

1. To the degree that one wants to create a relationship through networking, it is better, ceteris paribus, that the relationship have a longer expected duration.  Nobody washes a rented car (see: Breaking Bad), and in terms of dyadic relationships, the length of the relationship is bounded above by the shorter of the two scholars’, ahem, “time horizons.”

  2. To the degree that one wants to generate, produce, and publish influential ideas, it is better, ceteris paribus, to create relationships with those who have stronger extrinsic incentives (e.g., getting a job, getting tenure, being promoted) to “get stuff out the door” than with those who have weaker ones.

3. To the degree that one wants to avoid conflicts of interest in terms of shirking, credit-claiming, and so forth, it is better (as in the repeated prisoners’ dilemma) that both parties have long time horizons so as to increase the (both intrinsic and extrinsic) salience of potential future punishment/comeuppance for transgressions.

All three of these factors suggest that, if you’re a young scholar considering who to spend time with in Chicago in two weeks, don’t forget to meet other young scholars.  Share your ideas, buy a round of smelt, and remember why you’re doing this.  Similarly, it is also important to remember the famous line from Seinfeld:

When you’re in your thirties it’s very hard to make a new friend. Whatever the group is that you’ve got now that’s who you’re going with. You’re not interviewing, you’re not looking at any new people, you’re not interested in seeing any applications. They don’t know the places. They don’t know the food. They don’t know the activities. If I meet a guy in a club, in the gym or someplace I’m sure you’re a very nice person you seem to have a lot of potential, but we’re just not hiring right now.

With that, I leave you with this.

DON’T PANIC. Theory and Empirics Are Both Alive & Well…at least in political science.

Paul Krugman recently wrote a post about how/why formal theory has fallen behind empirical work in prestige/prominence in economics.  I agree with Krugman that the decline (if one thinks it has occurred) is not due to behavioral social science (Kahneman & Tversky’s voluminous body of work being the most notable in this field).  Krugman argues that behavioral research can’t be the cause because people had long known that the axioms of decision-making that undergird much of formal theory in the social sciences do not hold in the real world:

“…anyone sensible had long known that the axioms of rational choice didn’t hold in the real world, and those who didn’t care weren’t going to be persuaded by one man’s work.”

Well, I agree with this statement (for example, Adam Smith was famously well aware of this; see The Theory of Moral Sentiments).  But I disagree that this is why behavioral economics did not “cause” the decline of theory.  Mostly, this is because behavioral economics (and behavioral economists) have been looking for a theory to unify their disparate findings.  For example, Kahneman & Tversky are arguably most famous for prospect theory. That is, Kahneman & Tversky were not merely throwing hand grenades—they were at least partially occupied with the classical task of inductive theorizing.

I don’t have any dog in the fight about the relative position of theory and empirics in economics.  And by that, I mean, I am not even sure that dogs are involved in the skirmish or even if there is a skirmish worth keeping tabs on.  And, in many ways, I’m an economist.  Well, I am an economist to those who distrust economists and “just maybe an economist” to economists.  (See what I did there?)

In political science, which I proudly call my home, theory is definitely not “dead” (Krugman’s title is “What Killed Theory?”).  Rather, I like to think that, most days of the week, theory and empirics reside quite amicably side-by-side in our big tent of a discipline.  Sure, theorists make jokes about empiricists and empiricists make (typically funnier) jokes about theorists, but this is simply incentive compatibility: every empiricist chose not to be a theorist, and every theorist chose not to be an empiricist.  (Of course, many political scientists are a little bit of both, but rarely at the same time, if only because the jokes become oh so much more poignant.)  As a theorist, I (honestly) love empirical work—particularly descriptive and qualitative work that gives me fodder for new models, but also “causal” findings and quantitative conclusions that I can “get all contrarian on.”[1]

What has happened in political science during the last 20 years is a decline (in terms of number of articles published) of what one might call “pure,” or “technical,” theory.  In a nutshell, I—and others—think of social science theory as being usefully broken into two categories: pure and applied.  Pure theory tends to focus on the technical aspects of the model and accordingly asks more “general” questions.  The “purest” theory is inherently “untestable” outside of the theory itself: Arrow’s theorem, the Gibbard-Satterthwaite theorem, Nash’s theorem, May’s theorem, etc. all reach very general conclusions about a theoretical construct (Arrow’s theorem describes all aggregation rules (for 3 or more alternatives), Gibbard-Satterthwaite describes all choice functions (for 3 or more alternatives), Nash’s theorem describes all finite games, and May’s theorem describes all social choice functions between two alternatives).  This type of theory is hard in a specific sense: useful/explicable results are notoriously hard to obtain.  A fundamental reality of theorizing is that the expected number of results one can obtain from a model is proportional to the number of assumptions one makes.  Without belaboring the point, this difficulty is part of the reason such theory has become less prevalent in political science.  (However, as a “shameless” plug, I will note that Elizabeth Maggie Penn, Sean Gailmard, and I recently published just such a theory, entitled “Manipulation and Single-Peakedness: A General Result,” in the American Journal of Political Science (ungated version here).)

Applied theory, on the other hand, involves making more assumptions and, as a price, exerting the effort to motivate the model as descriptive or illustrative of something that either does or “could” happen.  I’ve talked about my view of the proper role of theory before.  I’ll keep it brief here and say that this type of theorizing is very much alive in political science.  Because “applied” sounds pejorative, I like to refer to this practice as “modeling,” which sounds sexier (and the Brits spell it “modelling,” possibly because their models tend to involve “maths” rather than “math”).

Relevant to Krugman’s point, at least as it might be extended to political science, modern political models include some that are “behavioral” in spirit (bounded rationality, etc.) and some that are more classical (common knowledge of the game, rationality, etc.).  To me at least, that’s a distinction without a difference: the quality of a theory/model is per se independent of its assumptions.  Rather, the quality is based on what it teaches me or makes me see in new ways.  This is why “rational actor” models are useful: for example, some rational actor models can explain apparently irrational behavior.  This is important for those who see the “irrational” behavior in question and start to make conclusions about policy and institutions based on their potentially flawed inference that people are irrational per se.  Similarly, behavioral models can generate predictions and possibility results that outperform (and/or are more easily understood than) rational actor models of the same phenomenon.  I like to think of rational actor models as being like the Cantor Set (or perhaps the Banach-Tarski paradox) and behavioral models as being like Taylor polynomials.

In other words, and regardless of whether you are comparing behavioral and rational actor models or pure and applied theory, neither is better or worse than the other from an a priori perspective, just like it is nonsense to assert that a hammer is “better” than a screwdriver: it depends on whether you need to smack something or twist it.  (AND WHAT IF YOU NEED TO DO BOTH? A.K.A. “A Theory of Revise & Resubmits.”)

As a final point, it is important to note (as we are seeing in the “big data” revolution—inter alia, here and here) that the process of “…theory with empirics with theory…” is one of complements, not substitutes: the value of new theory/data increases as data/theory gets ahead of it, and conversely, the value of additional data/theories declines as theory/data lag behind.  Krugman sort of tells this story in his post, but doesn’t explicitly extend it to the conclusion: theory and empirics have tended, and will probably continue, to cycle “in and out of fashion.”  I am fortunate that, at least right now, the big tent of political science includes active work in both areas.

With that, I leave you with this.

_________________________________________

Footnotes.

[1] I used to do empirical work, until a court (of my peers) ordered me, in the interests of both society and data everywhere, to cease and desist.

 

“Strength & Numbers”: Is a Weak Argument Better Than A Strong One?

Thanks to Kevin Collins, I saw this forthcoming article (described succinctly here) by Omair Akhtar, David Paunesku, and Zakary L. Tormala.  In a nutshell, the article, entitled “The Ironic Effect of Argument Strength on Supportive Advocacy,” reports four studies that suggest “that under some conditions…in particular, presenting weak rather than strong arguments might stimulate greater advocacy and action.”

This caught my attention because I think a lot about information and, in particular lately, “advice” in politics.  One of the central questions (in my mind at least) in political interactions is when communication can be credible/persuasive.  I was additionally attracted, given my contrarian nature, to the article because of statements such as this:

[These findings] suggest, counterintuitively, that it might sometimes behoove advocacy groups to expose their supporters to weak arguments from others—especially if those supporters are initially uncertain about their attitudes or about their ability to make the case for them. (p. 11)

Is this actually counterintuitive? I would argue, unsurprisingly since I’m writing this post, “no.”  Why not?

I have two simple models that indicate two different intuitions for this finding, both in a “collective action” tradition.  Beyond sharing a mathematical instantiation, the motivations behind the two models share a common premise: the article’s findings/results are largely confined to individuals who already supported the position being advocated.  For example, weak “pro-Obama” arguments were more motivating than strong “pro-Obama” arguments among individuals who supported Obama prior to exposure to the arguments. (The effect of argument strength was insignificant and actually in the opposite direction among those who did not support Obama prior to exposure.)

My focus on collective action in this post is justified because the four studies reported in the paper each examined advocacy for a collective choice (either the election of a candidate or the adoption of a public policy). Thus, in all studies, advocacy can potentially have an instrumental purpose: secure the individual’s desired collective choice.  Accordingly, suppose that an individual i is predisposed to support/vote for President Obama.  To keep it simple, suppose that advocacy is costly—if advocacy is less likely to affect the outcome of the election, then individual i will be less likely to perceive advocacy as being “worth it.”

Collective Action: Complementary Strength and Numbers. The question is how individual i should react to hearing a pro-Obama argument from a “random voter.”  If the argument is weak, should individual i—who, remember, already supports Obama—view advocacy as more or less likely to affect the outcome?

Well, suppose for simplicity that the election outcome is perceived to be a function like this:

f(q,n) \equiv \Pr[\text{Obama wins}] = \frac{qn}{1+qn},

where q>0 is the quality of the best argument that can be made for Obama (a content-based persuasive effect), and n>0 is the number of advocates for Obama (a “sample size”-based persuasive effect).  Then, being a bit sloppy and using derivatives (treating n as continuous, which is innocuous when n is large), the marginal value of advocacy is

\frac{\partial f(q,n)}{\partial n} = \frac{q}{1+qn}\left(1-f(q,n)\right) = \frac{q}{(1+qn)^2}

and, more importantly, the marginal effect of quality on the marginal value of advocacy (the “cross-partial”) is

\frac{2n^2q^2}{(nq+1)^3}-\frac{3nq}{(nq+1)^2}+\frac{1}{nq+1} = \frac{1-qn}{(1+qn)^3}.

The key point is that, for a reasonable range of parameters (specifically, in this case, whenever qn>1, which holds, for example, if n and q are both larger than 1), increasing the perceived quality of the best argument that can be made for Obama reduces the marginal instrumental value of advocacy for an Obama supporter.  Note that the perceived quality of the best argument that can be made for Obama is a (weakly) increasing function of the observed quality of any pro-Obama argument that one is presented with.  In other words, observing a higher quality pro-Obama argument should lower an Obama supporter’s motivation to engage in advocacy.
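For the skeptics following along at home, here is a minimal numerical sketch (in Python; entirely my construction, assuming nothing beyond the functional form above) checking the sign of the cross-partial:

```python
def f(q, n):
    """Perceived probability that Obama wins: q*n / (1 + q*n)."""
    return q * n / (1 + q * n)

def marginal_value(q, n):
    """Marginal value of advocacy: df/dn = (q / (1 + q*n)) * (1 - f(q, n))."""
    return q / (1 + q * n) * (1 - f(q, n))

def cross_partial(q, n):
    """d^2 f / (dq dn), which simplifies to (1 - q*n) / (1 + q*n)**3."""
    return (1 - q * n) / (1 + q * n) ** 3

# Negative whenever q*n > 1 (e.g., whenever q and n are both larger than 1):
for q, n in [(1.5, 2.0), (2.0, 10.0), (0.5, 0.5)]:
    print(f"q={q}, n={n}: cross-partial = {cross_partial(q, n):+.4f}")
# The first two cases are negative: a better best-possible argument lowers
# the marginal (instrumental) payoff to joining the advocates.
```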

Collective Action: Increasing Persuasive Strength. For the second model, let’s pull “numbers of advocates” out and, instead, let’s modify the election outcome model as follows:

f(q) \equiv \Pr[\text{Obama wins}] = \frac{q}{1+q},

where q>0 is the quality of the best argument that is made for Obama.  Now, add a little bit of heterogeneity.  Suppose that a(i) is the quality of the best argument that individual i “has” in favor of Obama.  This, at least initially, is private information to individual i, and suppose it is distributed according to a cumulative distribution function G.  Suppose for simplicity that the argument to which individual i is exposed is the best he or she has yet seen (this isn’t necessary, but allows us to get to the point faster), and denote this by Q.  Furthermore, suppose that individual i will find it worthwhile to advocate (i.e., spread/share his or her own pro-Obama arguments) if a(i)>Q. (This is similar to assuming that advocacy is costless, but this is not important for the conclusion.) Then what is the probability that individual i will find it strictly worthwhile to advocate after observing an argument of quality Q?  Well, it is simply

1-G(Q)

Since G is a cumulative distribution function, G(Q) is a (weakly) increasing function of Q.  Thus, 1-G(Q) is a (weakly) decreasing function of Q.  Again, observing a higher quality pro-Obama argument should lower an Obama supporter’s motivation to engage in advocacy.
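A similarly tiny sketch of this second model (the lognormal G is purely an illustrative assumption; any distribution delivers the same monotonicity):

```python
from scipy.stats import lognorm

# G: the distribution of a(i), the best pro-Obama argument individual i "has."
G = lognorm(s=1.0)  # an illustrative choice; any CDF will do

# Individual i advocates iff a(i) > Q, which occurs with probability 1 - G(Q):
for Q in [0.5, 1.0, 2.0, 4.0]:
    print(f"Q = {Q}: Pr[advocate] = {1 - G.cdf(Q):.3f}")
# Pr[advocate] falls as the quality Q of the observed argument rises.
```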

What’s the point?  Well, first, I think that information is a very interesting and important topic in politics—that’s “why” I wrote this. But, more specifically, it is ambiguous how to interpret the subsequent lobbying/advocacy behaviors of individuals with respect to varying qualities of information/arguments offered by others when individuals expect that the efficacy of their lobbying/advocacy efforts is itself a function of the quality of the argument.  In these examples, in other words, individuals might not be learning just about (say) Obama, but also about how effective their own advocacy efforts will be.  If this is the case, I humbly submit that the findings are not at all counterintuitive.

With that, I leave you with this.

Want It Now? Oh, We’ll Give It To You…Later

Did the Senate ironically kill (for the time being) an immigration deal by passing an immigration bill?  Arguably, yes.

Control of the Senate is up in the air in the 2014 elections. On the other hand, the GOP seems pretty likely to maintain its majority in the House. If the GOP wins control of the Senate and holds onto the House majority in 2014, then the GOP can control the finer points of an immigration bill in the 114th Congress.  The only practical impediments standing in the way of enacting the bill would be

  1. A filibuster by Democrats in the Senate and
  2. A veto by President Obama.

President Obama has come out strongly in favor of immigration reform, so let’s set that aside. The more interesting angle is the first one.  I wrote recently about the nuclear option, though it seems like we’re now at no worse than Defcon 2. Clearly, if the nuclear option is pulled in its strongest form and legislative filibusters are effectively neutered, then (1) would no longer present an impediment.  So, let’s presume that “the button” is not pushed.

It seems incredibly unlikely that the GOP will hold 60 seats in the Senate in 2015, so the Democrats could stand together and block an immigration bill that they “did not like.”  But, is this likely now?

I argue no, for two reasons.  First, every Democratic Senator voted in favor of S. 744, the Senate’s immigration bill. For some, this was a tough vote, at least in electoral terms.  Thus, these Democrats have already sent a high profile and potentially costly signal that immigration reform is “important,” and (this is important) for the Senators who viewed it as “a tough vote,” the sensible implication is that they want their skeptical constituents to believe that immigration reform is important for policy reasons (not partisan ones).

Accordingly, imagine that the GOP presents these Democratic Senators with a modified immigration bill, similar in many respects to S. 744.  Voting against, not to mention pursuing what might end up being high profile efforts to block, such a bill would arguably be seen as partisan obstructionism.  To be succinct, such efforts are not typically viewed favorably by exactly those voters who were/are skeptical of a Democratic incumbent: these are voters who tend to vote Republican but presumably might give a Democrat the benefit of the doubt if the incumbent is perceived to be competent, faithful to the state’s/the nation’s interests, etc.

But, remember, these incumbents will have already claimed that immigration reform is important and, arguably, faithful to their states’/the nation’s interests.  In a nutshell, the worries for the Democrats right now about immigration reform are actually focused on those Democratic Senators facing reelection in 2016.  If these members can’t stand the prospect of being seen as overly partisan (or, perhaps, as a flip-flopper), then the GOP can easily count on being able to get enough Democrat cross-overs to reach 60 votes if they control the Senate’s agenda through holding a majority of its seats in the 114th Congress.

Finally, Boehner and the House Republicans are probably thinking about exactly this possibility, given the meaningful chance that the GOP might win control of the Senate in 2014.  Accordingly, in conjunction with the apparent security of their majority in the House, the House Republicans have little to no incentive to consider any immigration bill this Congress, precisely because the Senate Democrats passed one this Congress.  Note, too, that many Senate Republicans also voted for the immigration bill, which merely strengthens the argument.

With that, I leave you with this.

I Would Manipulate It If It Weren’t So Duggan: The Gibbardish of Measurement

A fundamental consideration in decision- and policy-making is aggregation of competing/complementary goals.  For example, consider the current debate about how to measure when the “border is secure” with respect to US immigration reform.  (A nice, though short, piece alluding to these issues is here.)

A recent GAO report discusses the state of border security, the variety of resources employed, and the panoply of challenges associated with the rather succinctly titled policy area known as “border security.”  An even more on-point report was issued in February of this year.

Let’s consider the problem of determining when “the border is secure.”  This is a complicated problem for a lot of reasons, and I will focus on only one here.  Specifically, the question is equivalent to determining the “winners” from a set of potential outcomes.

In particular, there are a lot of potential worlds that could follow from (say) a “border surge.”  These worlds are distinguished by measurement, a cornerstone of social science and governance. For example, consider the following three measures of “border security”:

  1. Amount of illegal firearms brought across the border,
  2. Amount of illegal cocaine brought across the border, and
  3. Number of (new) illegal aliens in the United States.

(Note that there are a lot of ways to make this even more interesting, in terms of the strategic incentives of “the act of measurement.”  For example, if you want to believe that the level of illegal firearms brought across the border is low, an arguable way to do this is to stop “looking for firearms.”  But I will leave these incentive problems to the side and focus on the incentive to misreport/massage “sincerely collected” data/measurements. Furthermore, the astute reader will note that I could pull the same rabbit out of the same hat with only two measures.)

Before continuing, note that the selection of these measurements is left to the Secretary of the Department of Homeland Security (in consultation with the Attorney General and the Secretary of Defense), who is called upon in the bill to submit to Congress a “Comprehensive Southern Border Security Strategy,” which “shall specify the priorities that must be met” for the border to be deemed secure. (Sec. 5 of S. 744, the immigration bill as passed by the Senate.)

In general, of course, there are multiple ways to indirectly measure—and no direct way to measure—whether the border is “secure” (i.e., the notion of a “secure border” is one of measurement itself), and these must be aggregated/combined in some fashion to reach a conclusion.

On the one hand, it might seem like this is a simple problem: after all, for all intents and purposes, it is a binary one: the border is secure or it is not.  End of story.  AMIRITE?  No, that’s not true, because the issue here is that there are three potential programs to choose from.

To see this, suppose that there are three possible programs, plans A, B, and C.

Now, think about how you will/should measure if a program will result in a “secure border.”

The question at hand is how one compares the different programs.  So, to make the problem meaningful, suppose that at least one of the programs will be deemed successful and at least one will be deemed unsuccessful (otherwise the measurement is meaningless).

The Gibbard-Satterthwaite (and, even more accurately, the Duggan-Schwartz) Theorem implies that such a system can/should not guarantee that one elicits truthful reports of the measurements of all dimensions (guns, drugs, illegal aliens) in all situations, even if the measurements are infinitely precise and reliable.

Why is this?  Well, in a nutshell, in order to elicit truthful reports of every dimension of interest (i.e., guns, drugs, and illegal aliens), the system must be increasing in each of these measures.  However, this is at odds with making trade-offs.  In the context of this example, there are programs A and B such that A decreases guns but has no other effect, and B decreases drugs but has no other effect.  In this case, which program do you choose?  Putting a bunch of “reduction” in the black box, one must “eventually” choose between A and B, at least in some situation, because otherwise the measurements of guns-reduction and drugs-reduction become meaningless.

So, suppose that A decreases guns by “a little” but B decreases drugs by “a lot.” How do you compare a handgun to a pound of China White? Choose a ratio, and then imagine that plan B is just a peppercorn shy of the “cutpoint” in terms of the reduction of drugs relative to the decrease in guns…but (say) A is $100 billion more expensive than B…what would you report about the effectiveness of B?

Well, you’d overreport the effectiveness of B (or underreport the effectiveness of A, possibly). AMIRITE?  The measures are inherently incomparable until you choose how to make them comparable.
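To pull the rabbit all the way out of the hat, here is a deliberately crude sketch (entirely my construction; S. 744 specifies no such scoring rule) of a linear aggregation rule with a cutoff, and the misreport it invites:

```python
# A hypothetical aggregation rule: a weighted sum of reported reductions,
# with a program deemed to "secure the border" iff its score clears a cutoff.
WEIGHT_GUNS, WEIGHT_DRUGS = 1.0, 1.0  # the chosen handgun-to-China-White "ratio"
CUTOFF = 10.0

def secure(guns_reduction, drugs_reduction):
    return WEIGHT_GUNS * guns_reduction + WEIGHT_DRUGS * drugs_reduction >= CUTOFF

# True measurements: A barely clears the cutoff; B is a peppercorn shy of it
# (and, say, A is $100 billion more expensive than B).
true_A = (10.1, 0.0)  # A decreases guns a little, nothing else
true_B = (0.0, 9.9)   # B decreases drugs a lot, nothing else

print(secure(*true_A))  # True:  A deemed successful
print(secure(*true_B))  # False: B deemed unsuccessful

# An evaluator who prefers B need only nudge B's reported drug reduction:
reported_B = (0.0, 10.0)
print(secure(*reported_B))  # True: B now deemed successful, too
```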

So…what does this mean?  Well, first, that governance is hard—and perpetually so.  But, more specifically “mathofpolitics,” it clearly and unquestionably indicates that theory must come before (practical or theoretical) empirics.  In a nutshell: every non-trivial decision system is astonishingly susceptible to measurement issues, even when measurement is not actually a practical problem. For the skeptical among those of you still reading, note that I only “played with” elicitation/reporting—I am happy to assume away for the moment the very real and fun issues of practical measurement.

With that, I leave you with this.

 

A Byrd in the Hand, or the 3 R’s of the Senate: Reid, Rules, & Retribution

Forceful confrontation to a threat to filibuster is undoubtedly the antidote to the malady.
–Sen. Robert Byrd (D, WV)

Filibuster reform in the US Senate has once again begun to attract attention.  In a nutshell, Majority Leader Harry Reid (D, NV) is—ahem—upset that—in his opinion, at least—Republican Senators are unreasonably holding up executive branch nominations out of either animus towards the Obama Administration, hostility to the missions of the government agencies in question, or both.  As a result, Reid is contemplating “the nuclear option,” in which the Democrats and Vice President Joe Biden, as President of the Senate, would use the essentially majoritarian character of the Senate’s rules to clarify that the Senate is—particularly when a majority is ticked off—an essentially majoritarian body in which 51 votes wins.

Senate Republicans, who hold a minority of seats, are understandably upset about the possibility of “the bomb being dropped” and are threatening retribution if Reid goes nuclear.  I will not describe the procedural details of the nuclear option, which are easily found for those who, like me, enjoy parliamentary skullduggery. Instead, I will focus on the “mathofpolitics” of Reid’s situation.

Let’s set up the problem in a succinct fashion.  Going nuclear, Reid and the Senate Democrats can at least ensure an up or down vote on nominees.  While it isn’t clear (feel encouraged to clarify this for/update me in the comments or “offline/online” by emailing me) exactly how “big” of a nuclear bomb Reid will/can drop in the sense of whether it would guarantee votes on all executive nominations, including judicial, or just those to executive agencies (or some other subset), I’ll keep it simple and just presume it’ll apply to all nominations, but not legislation.

Presumably, being able to get a timely up or down vote on nominations will be good for Reid and the Democrats right now, because President Obama is a Democrat.  So, there’s the easiest argument for why Reid should “go nuclear.”

However, there are at least two frequently forwarded arguments for why Reid should not go nuclear: the “scorched earth” argument and the “uncertain majority” argument.  The scorched earth argument goes as follows: if Reid goes nuclear and ensures votes on nominations, then Senate Republicans will find new ways to halt business, and retaliate by slowing the Senate down even more.  The uncertain majority argument, on the other hand, points out that the Democrats will not hold the majority forever, and their procedural victory will eventually get used against their interests by a subsequent Republican majority. I’ll consider these arguments in turn, and then summarize a “third way” that Reid might go.

As Ezra Klein points out, the scorched earth argument is (arguably, at least right now) less powerful than one might presume.  In a nutshell, this is because one might argue that the Republicans are currently “blocking everything” anyway.  (This is shorthand—as Minority Leader Mitch McConnell (R, KY) and others have pointed out, the Senate Republicans are not blocking everything, or even all nominations.)  Accordingly, from a game theoretic perspective, one might argue that this argument should not have much impact on the Democrats’ decision to go nuclear or not.  Indeed, it suggests a rationale for why Reid and the Democrats should at least threaten to go nuclear: if the Republicans recognize that the scorched earth argument does not have much pull on the Democrats, then they will take such a threat more seriously and, to the degree Republicans do not want the nuclear option used, they will have an incentive to offer concessions to avoid its use.  (See the great series of posts essentially about this dynamic by Jonathan Bernstein and Sarah Binder (here’s Bernstein’s response, and Binder’s response).)

Perhaps as a result of the limitations of the scorched earth argument in the current stand-off, the uncertain majority argument has been quickly brought forward by Senate Republicans.  Indeed, while the scorched earth argument has been publicly forwarded, too, it has essentially been quickly replaced/backed up by the uncertain majority argument. For this reason, as well as the fact that the excellent exchange between Bernstein & Binder essentially focuses on the likelihood/credibility of the scorched earth response, I will move on to the uncertain majority argument.

First, the uncertain majority argument is not dispositive. (Note that the scorched earth argument, to the degree that the minority can credibly implement it, is potentially dispositive in the sense of its irony: vote to speed things up and instead slow things down.)  This is because a bird in the hand is arguably better than two in the bush.  Reid’s experience with the Senate Republicans may have shown him that there may not even be one “in the bush.”  The real worry here is that, to the degree that Reid and the Democrats are actually tempted by the nuclear option, the GOP will presumably also be tempted by it when it gains the majority.

Let’s think about this for a second.  Thinking through the strategic situation a little offers some insight into the relevant factors all Senators should be thinking about.

Suppose first that the nuclear option is “popular” in the sense that the public will approve (or at least not disapprove) of its use.  Well, in this case, the majority party should realize that, if they go nuclear, then—ceteris paribus—they are more likely to retain the majority.  More subtly, if we presume that this popularity is likely to hold into the near future, the majority should realize that if the minority party gains the majority in the near future, the new majority party will face a similar situation and, accordingly, the current minority party is likely to go nuclear at that point.  Both scenarios suggest that the Democrats should go nuclear: it would be popular and the minority party would do it if they gain the majority in the future.

So, under what conditions would the nuclear option not be a good choice?  Well, it boils down to two questions:

  1. Would “going nuclear” be unpopular/electorally costly? (Republican Senators are arguing that it would be.  Extra credit: given that we have a two-party system, why should one be at least a little skeptical of this statement?)
  2. If one does not go nuclear now, will “going nuclear” be popular in the future?

The first question is more pressing than the second, if only because of “bird in the hand” reasoning.  I don’t know the answer, and it’s not clear to me that anyone does, because I don’t think voters care, per se, about this procedural move: they want the Senate to “play by the rules” and “get things done” (i.e. the “cake and have it too” syndrome).  What is clear to me, however, is that Reid’s actions will frame the issue and at least partially determine the popularity of “going nuclear.”  I return to this below to conclude the post but, in a nutshell, I think this dimension is what will ultimately prevent the Democrats from going nuclear and, to be clear, I think part of that is Reid’s fault (but maybe by design).

The second question—will going nuclear be popular in the future—is truly secondary for now, but this will potentially be less true in 2016, should the Democrats hold on to the Senate majority in 2014.  This is because President Obama is a Democrat, and the Republicans’ promised uses of a “51-vote Senate” (e.g., read to the end of this) include using “their majority control to jam through a repeal of the Affordable Care Act, a repeal of the Wall Street Reform Act and other GOP priorities.”  Indeed, Sen. Lamar Alexander (R, TN) “said the GOP conference could pass with a simple majority vote legislation to weaken unions, authorization to complete the Keystone XL oil sands pipeline and other items.”

Because neither party controls two-thirds of either chamber, I have a strong suspicion that neither “a repeal of the Affordable Care Act” nor “a repeal of the Wall Street Reform Act” will actually occur before January 2017: I just can’t see President Obama signing such bills (in fact, I kind of doubt that the GOP would pass such bills even if they had the numbers).

So, I think the primary question for Senate Democrats is how electorally costly going nuclear would be.  As I alluded to above, I think going nuclear would give the GOP a useful mobilizer for the 2014 midterm elections.  Right now, the GOP doesn’t have much of “an issue” to run on in my opinion (neither do the Democrats).  “Breaking the rules to ram through executive appointments,” particularly against the backdrop of the AP wiretaps/PRISM/IRS/Benghazi scandals (not to mention the for-now-arcane recess appointments dustup), might have a lot of traction with swing voters.

Reid’s actions at this point are key.  For all of the parliamentary Dr. Strangeloves out there, this is how I would go nuclear if I were Reid.  (I’m not the first by any means to make this argument, but sometimes it’s good to repeat things that seem to make sense.)

Reid should force the GOP to take the floor.  He should essentially set aside the “tracking system” in which the Senate moves from item-to-item in a parallel fashion.  He has said the GOP is being obstructionist.  Instead of trying to invoke cloture on the nomination of (say) Gina McCarthy (currently awaiting confirmation as administrator of the EPA), Reid should bring McCarthy’s confirmation up for debate, and not let the Senate move on until a vote is taken.  MAKE THE GOP FILIBUSTER/OBSTRUCT IN PUBLIC.

In equilibrium, voters should not believe Reid’s claims that the GOP is obstructing business.  We as the electorate could, I suppose, crack open the Congressional Record and try to discern the counterfactuals but (1) that is actually pretty hard to do in a sensible way—if obstruction is unpopular (which it essentially must be if going nuclear will be electorally popular), then obstructionists have an incentive to obfuscate their obstructionism (say that 3 times fast), and (2) as Anthony Downs famously made clear, we not only aren’t going to, it isn’t even clear that it would be rational for us to take the time to do so.  No…if overcoming obstructionism is a big deal, Reid and the Democrats need to sit down and send the costly signal of its importance to the public by (ironically) forcing the GOP to obstruct in a visible fashion.  I don’t think Reid is going to do this, and maybe that’s the right decision, given the facts, but if he doesn’t do this, I don’t think he’ll go nuclear.  Senate rules are kind of like an A-Team episode: sure, there’s a lot of fights, but nobody ever dies.

With that, I continue a theme and leave you with this.

 

Remuneration Of The Nerds, Or “Putting the $$ in LaTeX”

I’ve been thinking a lot about signaling lately. The central idea of signaling is hidden (or asymmetric) information.  A classic example of signaling is provided by education, or more specifically, “the degree.”

Suppose for the sake of argument that a degree is valuable in some intrinsic way: a college degree is arguably worth $1.3 million in additional lifetime earnings. (Let’s set aside for the moment the level of tuition, etc., that this estimate (if true) would justify in terms of direct costs of a college degree. I’ll come back to that below.)

Instead, let’s think about the basis of this (“market-based”) value.  A simple economic story is that the education & training acquired through obtaining the degree increase the marginal productivity of the individual by (say) $1.3M.  Well, I don’t even REMEMBER much of college (and probably thankfully…AMIRITE?), so this seems unlikely.

Another, more interesting (to me at least), explanation is that the value of the degree is through its signaling value.  There are a number of explanations consistent with this vague description, including

  1. College admissions officers are good (in admissions and the act of “allowing to graduate”) at selecting the “productive” from the “unproductive” future workers.  Maybe.  College admissions is hard, and I respect those who carry the load in this important endeavor.  But…
  2. Finishing college shows “stick-to-it-iveness” and thus filters the “hard workers” from the “lazy workers.”  Again, this is undoubtedly a little true.  But there are other ways to “work hard.”  So, finally…
  3. College does 1 & 2 and, to boot, adds a “selection cherry” on top.  In particular, the idea of the college major allows individuals to somewhat credibly demonstrate the type of work they find most appealing (controlling for market valuation, to which I return below).

Explanation 3, as you might expect, is the most interesting to me.  What am I thinking?  Well, back when I was a kid, going to law school was considered a hard, but not ULTIMATELY HARD way to score some serious dough in one’s first job.  Sure, it took some money, and some serious studying, but—HAVE YOU SEEN THOSE STARTING SALARIES?  HAVE YOU SEEN “THE FIRM”?  Oh, wait.  Wait…no, seriously…YOU CAN BE TOM CRUISE AND WIN IT ALL.

On the other hand, math (pure or applied) was considered a “very good, but…come on” kind of major.  In particular, a perception (not completely inaccurate) was that math was hard, but didn’t really “train/certify” you for any job other than, perhaps, being a math teacher.  But, this argument falls on its face after a bit of thought: you can be a math teacher without being a math major.  (I’m proof of this general concept: I am a “political science teacher” and was a math/econ double major.)  So, what gives?  Why would you be a math major?

Because you are intrinsically motivated (i.e., you “like math problems”).  In other words, you are signaling a true interest precisely because there are other, arguably easier, ways to get to the same moolah.  Which means that you’ve sent a (potentially costly) signal to potential employers that this is “what makes you tick.”  This is the information that your degree provides: you have shown them costly signals of what you actually like to “stay late” and work on.

The same argument goes for majors that are both demanding and relatively specialized (e.g., petroleum engineering, actuarial science): employers can be more certain upon seeing such a degree that you really want a career in this—you like it (where “it” is the substance/drudgery of what the job entails).

In other words, to the degree (pun intended) that the value of college is just about selection (explanation 1), then admission to the “marginal school” (i.e., any school that admits every applicant) should be valueless (which I don’t think it is).  If the value of college were just about “showing you can finish something” (explanation 2), then the value of college would be no different/less than completing four years of (say) military or missionary service.  (And, maybe it is no different, but many people follow such admirable service by pursuing a college degree.)

Accordingly, the fundamental signaling value of a college degree is arguably not in its possession, but in the information contained about how it was obtained.  In other words, “the major.”  Of course, there are other, but in my mind ancillary, determinants of the value of a college degree.  As my Dad told me when I was growing up (which is kind of meta),

It isn’t all about the destination—half the fun is in “getting there.”

If that wasn’t true in terms of how one’s actions are interpreted, then one’s actions are even more easily interpretable.  Stew on that for a second.

Finally, in terms of the “math of politics” of this reasoning, note that costly signals are everywhere, and they are important far beyond college: legislative committee assignments, the development of reputations by “policy entrepreneurs” (I’m looking at you, Ron Paul, Ted Kennedy, & John McCain), the development of expertise/autonomy in bureaucracies/central banks, the emergence of “neutral independence” in judiciaries, and the credibility of “dying on the hill for a cause” necessary for policy bargaining by “fringe” political groups (see: Green Party, Pro-Choice/Life groups, PETA, Tea Party, Muslim Brotherhood, ACLU).  There are many, many applications of the notion that value is assigned by selectors (voters, employers, the unmarried) in signals that more precisely reveal hidden information about the tastes/predilections/goals of those vying for selection into potentially long-term, repeated relationships.

With that, I leave you with this.

“Syllogism? I Hardly Know Him!”: The Uneasy Wedding of Gay Marriage & (Political) Conservativism

“Disambiguating,” as wikipedia fittingly obscurely puts it,

Conservatism is a set of political philosophies that favour tradition.

My point in this post is a defense of the Supreme Court’s ruling in US v. Windsor that the “Defense of Marriage Act” is unconstitutional.  (The majority’s reasoning in that case—that Section 3 of DOMA amounted to “a deprivation of the liberty of the person protected by the Fifth Amendment”—is something I will return to below.  For now, I will focus simply on whether the US government “ought to be in the business of” differentiating between marriages.)

Well, in a nutshell, my focus should be enough to elucidate the syllogism underlying my argument:

Major premise: The US Federal Government has not historically differentiated between different marriages.

Minor premise: A government should not make a differentiation that has not been made traditionally.

Conclusion: The US Federal Government should not make a differentiation between different marriages.

Note that, at least to my (and wikipedia’s) knowledge, the Federal Government never enacted an anti-miscegenation law, the nearest analogy in my opinion.  Thus, while the Federal Government has distinguished (and still distinguishes) between the married, unmarried, and widowed, the fact is that federal law has traditionally adopted a pretty fittingly (Gertrude) Steinian position vis-a-vis marriage:

A marriage is a marriage is a marriage…

Thus, the Supreme Court struck a (in my opinion, well overdue, albeit for additional reasons) decisive blow in favor of conservativism in its ruling in Windsor.  Simply put, laws should in general be few and, more importantly, avoid whenever possible making new distinctions between people and/or actions.  (There are a couple of arguments about why I believe this, including the undesirability of providing overly powered incentives.  For example, in this case, why should society attempt to provide a differential incentive to a man or woman who thinks about marriage to marry someone of the opposite gender?  More subtly, why should the federal government tell states that they need not think about providing such differential incentives?  Finally, why should the government provide differential incentives to “get married,” as opposed to more directly focusing on providing incentives for “being a responsible adult and taking care of your children?” But that is for another post.)

Before concluding, I’ll comment briefly on what I think—as a certified non-lawyer—the “best” reasoning for overturning the statute.  The full faith and credit clause (Article IV, Section 1 of the United States Constitution, henceforth FFCC) of the Constitution is, in my opinion, quite clear.  (As is, to be fair, the clause’s essential reiteration  in federal law.)  In a nutshell, the clause states that each State must recognize as legally binding the

“public acts, records, and judicial proceedings of every other state.”

So, the question here is one of degree.  Fast-forwarding, it is reasonable and true that the FFCC does not say that each State must recognize/enforce the laws of every other state.   Why?  Well, because it doesn’t, and because it would be ridiculous if it did.  Accordingly, if one state allows its clerks to issue marriage licenses to same-sex couples, then this does not mean that any other state need issue such marriage licenses.  We see differences like this everywhere in family law: the details of acceptable grounds for divorce, alimony, community property, etc. all differ in various states.

The question here, then, is actually not about the so-called “public policy exemption” that some believe is implicit in the original text of the Constitution.  I’ll agree that such an exemption must exist: but a marriage license is not a gun license.  (Insert joke here.)  In particular, while State A does not need to recognize State B’s decision to allow you to possess a particular gun in State B, State A does not possess the power (at least under the Constitution) to assert that the gun is not yours.  To me, at least, that’s the point, and it’s a real one: I’m strongly for marriage equality (though, again, maybe for weird reasons…), but as strongly as I believe that DOMA violates the FFCC, I don’t think that the FFCC (per se) guarantees all married couples “equal rights” with respect to certain “marital actions” such as adoption (because the state has a compelling public interest in regulating adoption, for example).

I admit this because, again, I have a stronger (and in some sense “Constitution-free”) argument for why states should not differentiate between marriages based on the genders of the couple in question.  Sometimes you need a hammer, and sometimes you need a screwdriver: this more specific problem needs a screwdriver, and the FFCC is a hammer.

With that, I leave you with this.

Believe Me When I Say That I Want To Believe That I Can’t Believe In You.

A recurring apparent conundrum is the mismatch between Congressional approval (about 14% approval and 78% disapproval) and reelection rates (about 91% in 2012).  If Americans disapprove of their legislators at such a high rate, why do they reelect them at an even higher rate?  PEOPLE BE CRAZY…AMIRITE?

Maybe.  A traditional explanation is that people don’t like Congress but like THEIR representatives.  Again, maybe.  Indeed, probably.  But, in the contrarian spirit of mathofpolitics, I wish to forward another explanation.  This explanation is undoubtedly wrong, but raises an interesting welfare point that could still hold even if the microfoundations of the explanation are amiss.

The heart of the explanation is that a cynical belief along the lines of “all politicians are crooks” can help voters get better/more faithful representation from their representatives.  Thus, ironically, a deep suspicion among voters about the accountability of legislators can aid one in keeping those legislators behaving in accord with the voters’ wishes.  (PEOPLE BE CRAZY LIKE A FOX…AMIRITE?)

Now, let’s get to the argument/the model.

Theoretically, elections might help achieve accountability (i.e., make incumbents do what voters want) through their potential ability to solve two important problems: moral hazard and adverse selection. The essence of moral hazard is “hidden action”: I want my representatives to work smart AND hard, but I can’t actually observe them working.  Instead, I observe noisy real-world indicators of how hard they’re working: unemployment, budget deficits, health care costs, Olympic medals, and Eurovision.  To the degree that these are correlated with legislative effort and I condition my vote choice on these observables, I can provide an incentive for a reelection-motivated incumbent to work smart and hard.

The essence of adverse selection is hidden information: I want my legislator to take actions on my behalf when they present themselves.  The best way to “get this” is to have a legislator who shares my preferences.  (Think Fenno’s Home Style.) But, practically speaking, every aspiring representative will tell me that he or she shares my preferences.  So, regardless of how I discern whether my incumbent shares my preferences, the reality is that I will have a strict incentive to reelect an incumbent who I expect shares my preferences…because, after all, I generally have little information about his or her challenger’s preferences.

So, in a nutshell, you’d like to solve both of these problems simultaneously: you want your incumbent to share your preferences and work both smart and hard.  In the immortal words of Bachman Turner Overdrive, you want them to be taking care of business and working overtime. Every day. Every way.

Unfortunately, solving both of these problems is frequently impossible.  The details of why can be summed up with the old saying that “a bird in the hand is better than two in the bush.” You see, once you believe that your legislator truly has your interests at heart, you will (and, in a certain way, should) be more likely to forgive/reelect him or her if he or she is a little lazy.

This fact ends up making it harder for a voter to discipline his or her representatives: in particular, think of the following simple world.  Assume there are faithful and faithless politicians (this is fixed for any given politician) and that every politician can either work hard or be lazy. And, for simplicity, suppose that the faithful type always works hard. (This is not important, it just simplifies the exposition.)

Furthermore, let p \in (0,1) denote the probability that a random incumbent is faithful.

Here’s the rub: you don’t observe the politician’s type (faithful or faithless) or how hard he or she worked.  Rather, you observe one of three outcomes: Great, OK, or Bad.  Finally, suppose that the probabilities of observing these outcomes are

P[Great | work hard & faithful] = 0.35
P[OK | work hard & faithful] = 0.55
P[Bad | work hard & faithful] = 0.1

P[Great | work hard & faithless] = 0.3
P[OK | work hard & faithless] = 0.5
P[Bad | work hard & faithless] = 0.2

P[Great | be lazy & faithless] = 0.1
P[OK | be lazy & faithless] = 0.5
P[Bad | be lazy & faithless] = 0.4

(If you take a moment, you’ll realize that outcomes are more likely to be better for faithful types regardless of how they work and for those who work hard, regardless of their type (i.e., ceteris paribus). …JUST AS IN LIFE.)

To illustrate the problem at hand here:

Suppose that the voter observes an “OK” outcome.  What is the voter’s posterior probability that the politician is a faithful type?

\Pr[\text{faithful}|\text{outcome=OK}] = \frac{0.55p}{0.55p + (1-p)(0.5 x + 0.5 (1-x))}=\frac{0.55p}{0.5+0.05p}>p,

where x denotes the probability that a faithless politician works hard.  (Note that my judicious choice of probabilities obviates the need to worry about what this is.  YOU’RE WELCOME.)

The key point is that the probability that the politician is the faithful type, conditional on seeing only an “OK” outcome, is greater than p, the probability that a random challenger will be a faithful type.

This means that, upon seeing an “OK” outcome, you should reelect your incumbent.  He or she is a (probabilistic) bird in the hand.
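If you want to check the arithmetic (and that the judicious choice really does make x irrelevant), here’s a quick Python sketch of the posterior calculation above:

```python
def posterior_faithful_given_ok(p, x):
    """Pr[faithful | OK] given prior p and probability x that a faithless
    incumbent works hard; P[OK | faithless] = 0.5 whether the faithless
    type works hard (0.5) or is lazy (0.5), so x washes out."""
    p_ok_faithful = 0.55
    p_ok_faithless = 0.5 * x + 0.5 * (1 - x)  # = 0.5 for any x
    num = p_ok_faithful * p
    return num / (num + (1 - p) * p_ok_faithless)

for p in [0.1, 0.5, 0.9]:
    post = posterior_faithful_given_ok(p, x=0.3)  # x is irrelevant by design
    print(f"p={p}: Pr[faithful | OK] = {post:.4f} > p? {post > p}")
# The posterior always exceeds the prior p, so the "OK" incumbent beats a
# random challenger: the (probabilistic) bird in the hand.
```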

So what?  Well, this all goes away if you believe that there are no faithful politicians.  That is, if you’re a hard-core cynic (as many of my FB friends purport to be), and you believe p=0, then upon observing an “OK” outcome you can credibly (and, concomitantly, “should”) throw the bum out.  If p=0, then (assuming that working hard is not too costly to the incumbent) the optimal reelection rule in this setting is to reelect if and only if the outcome is “Great.”  Furthermore, imposing such a rule in such cases will yield a higher expected outcome for you (the voter) than you can obtain in equilibrium when p>0.

In summary, it’s a lot easier to “throw the bums out” if you actually think they’re all bums. This, ironically, will both make you better off and lead to you throwing fewer out, because the “bums” will know what you think of them.

Tying this back to real-world politics, the mathofpolitics/logic of adverse selection and moral hazard suggests a somewhat subtle value of cynicism in politics.  (Which, perhaps seemingly oddly for a game theorist, is something I detest.)  What is also kind of neat about this logic is that it provides an interesting argument in favor of primaries: the whole logic of why adverse selection undermines the solution to the moral hazard problem is that the voter cannot select a replacement who is “just/almost as likely as the incumbent” to have the same innate interests as the voter.  To the degree that we believe that this type of alignment between incumbents and voters is correlated with the incumbent’s partisanship, primaries offer the voters a (more) credible tool to discipline their incumbent’s moral hazard problem.

And with that, I am left thinking of Arlen Specter and accordingly want to leave you with this.