Wasted Votes

The Efficiency Gap and a Solution to Gerrymandering

Carter Hanson
Jul 3, 2020

The Consent of the Governed Part 2 // Listen to this as a podcast.

Part I: Breaking a Quorum

In 2000, prior to the 2001 nationwide redistricting, the Texas congressional delegation comprised 18 Democrats and 12 Republicans, slightly favoring Democrats even though both parties had received roughly the same share (about 48%) of the statewide House popular vote. In 2001, the Republican-controlled legislature attempted to redraw the lines and gerrymander the congressional map to disadvantage Democrats. The effort ultimately failed: the new maps did not pass the state legislature, and the courts stepped in and drew the maps themselves. In the 2002 House elections, Democrats won 17 of Texas’s 32 districts, despite losing the statewide House popular vote. The Republican legislature, seeing its failure in the 2002 elections, decided to redistrict the congressional map yet again, even though it was not a decennial redistricting year.

Texas Republicans made redistricting a priority primarily because the U.S. House was fairly closely divided, with Democrats holding 204 seats to Republicans’ 229. If Democrats did well in the next House elections in 2004, they could take the House and derail President Bush’s policy agenda. In a predominantly Republican state like Texas, the legislature reasoned, Democratic seats were far more vulnerable, especially if entrenched Democratic incumbents could be removed by a gerrymander (Steven Levitsky & Daniel Ziblatt, How Democracies Die, 154).

Under the guidance of U.S. House Majority Leader Tom DeLay, Republicans drew up a map that would ensure a Republican-majority congressional delegation. In How Democracies Die, authors Steven Levitsky and Daniel Ziblatt wrote: “The new map left six Democratic congressmen especially vulnerable. The plan was pure hardball. As one analyst posited, it ‘was as partisan as the Republicans thought the law would allow,’” (Levitsky & Ziblatt, 154).

Democrats in the Texas state legislature, desperate to stop the gerrymander, turned to Jim Dunnam, the chairman of the State House Democratic Caucus. Malcolm Gladwell interviewed Dunnam in Season 3, Episode 1 of his eclectic podcast, Revisionist History. In the interview, Dunnam said, “I had members coming up to me and say, ‘You know, Jim, you’ve got to do something,’ and I was like, ‘What are we going to do?’ I said, ‘Well, we can bust the quorum.’”

The Texas House of Representatives comprises 150 members; at the time, 88 of those were Republicans and 62 were Democrats. A quorum in the chamber is 100 members, so Dunnam organized for 50 Democrats, plus himself, to flee to nearby Oklahoma, leaving the House one member short of a quorum. They left the state because the Speaker of the House could issue arrest warrants for the insurgent Democratic legislators only while they remained in Texas.

Again, here’s Gladwell: “Dunnam hires buses, gets everyone to meet at a hotel in Austin, does a headcount; 50 plus himself. Doesn’t tell anyone where they’re headed or when they’re coming back; need to know basis only. It’s an undercover operation.

Monday comes and when the Republicans are ready for their triumphant vote, they suddenly realize they don’t have a quorum. They launch a manhunt for the missing Democrats…”

The Republicans were shocked and aggravated by the Democratic exodus, with Republican Governor Rick Perry calling the maneuver “cowardly and childish.” Texas Republican Chairwoman Sue Weddington said of the Democrats who were now holed up in Oklahoma, “They may believe they are clever, but the majority of Texans see them as childish.” Republicans were so frustrated in their search for the missing Democrats that, according to a CNN article, “In Austin, Republicans exhibited a deck of cards bearing the lawmakers’ pictures — similar to those issued to U.S. troops to help identify fugitive Iraqi leaders — and milk cartons bearing the images of the missing lawmakers.”

After four days of self-imposed exile in middle-of-nowhere Oklahoma, the Texas House retracted the redistricting bill. The Democratic victory was short-lived, however: Governor Rick Perry called a special legislative session that summer, and House Democrats were caught off guard, unable to organize another walkout. When the bill was introduced in the state senate, Senate Democrats took up the tactic instead. As Levitsky and Ziblatt describe in How Democracies Die: “The Democrats, following the precedent of their House colleagues, tried to thwart the bill in absentia by boarding a plane and flying to Albuquerque, New Mexico. They remained there for more than a month, until Senator John Whitmire (soon to be known as ‘Quitmire’) gave in and returned to Austin,” (Levitsky & Ziblatt, 154). Whitmire’s surrender effectively signaled the end of the 2003 Democratic legislative rebellion, and the congressional lines were redrawn.

The new map succeeded in what it was designed to do: turn the map red, regardless of — or despite — the will of the people. In the 2004 elections, Republicans flipped 6 seats in Texas, winning 21 of Texas’s 32 congressional seats — that represents about 65.6% of the Texas congressional delegation, despite the GOP receiving only 57.7% of the statewide House popular vote. Sixteen years later, the Texas congressional delegation looks remarkably similar: Texas is now represented by 13 Democrats and 23 Republicans, despite Democrats’ vote share increasing from 39.0% in 2004 to 47.1% in 2018.

Following the 2003 Texas re-redistricting, LULAC (the League of United Latin American Citizens) filed suit against then-Texas governor Rick Perry. The case eventually reached the Supreme Court in 2006. I talked about LULAC v. Perry in part 1 of The Consent of the Governed, but it is worth discussing further, as it paved the way for future partisan gerrymandering cases and the introduction of the efficiency gap. Additionally, the story behind LULAC demonstrates the visceral nature of the gerrymandering issue and its importance for American democracy.

As discussed last episode, LULAC relied on partisan bias measures to prove the presence of partisan gerrymandering, and argued, primarily, that the new map violated the Equal Protection Clause of the Fourteenth Amendment, as well as the First Amendment. The majority of the court expressed interest in partisan bias but ruled that the Texas redistricting did not violate the Constitution. Furthermore, it ruled that, as long as states redistricted at least once every decade, they could re-redistrict as much as they wanted. The court did, however, order the Texas 23rd District to be redrawn, as they determined it to be an unconstitutional racial gerrymander, violating the Voting Rights Act.

LULAC is significant not for the specifics of the Texas redistricting case, but because Supreme Court Justice Kennedy, along with the four liberal justices, hinted that they might be open to ruling on the constitutionality of partisan gerrymandering in the future. Two years earlier, in the Vieth ruling, Justice Kennedy had expressed this openness: “[N]ew technologies may produce new methods of analysis that make more evident the precise nature of the burdens gerrymanders impose on the representational rights of voters and parties,” (Kennedy, concurring, 8).

However, Kennedy would only rule if the plaintiffs utilized a measure of gerrymandering that was superior to partisan bias in four key areas. First, it could not rely on the assumption of uniform partisan swing; second, it could not use hypothetical elections; third, it must have a set threshold of unconstitutionality, which was based on historic election data; and fourth, it must be used in conjunction with other comprehensive measures of gerrymandering (Nicholas Stephanopoulos & Eric McGhee, Partisan Gerrymandering and the Efficiency Gap, 845–46).

Part II: The Efficiency Gap

The efficiency gap, a measure of partisan gerrymandering introduced in Partisan Gerrymandering and the Efficiency Gap by Nicholas Stephanopoulos and Eric McGhee in 2015, was a product of LULAC in that it hoped to address Kennedy’s four requirements and to pave the way for a definitive Supreme Court ruling on the constitutionality of partisan gerrymandering. It functioned as a calculation of each party’s wasted votes across a state, reasoning that a gerrymandering party would attempt, in the redrawing of district lines, to make the opposition party waste more votes than the gerrymandering party. As Stephanopoulos and McGhee put it: “A gerrymander is simply a district plan that results in one party wasting many more votes than its adversary,” (Stephanopoulos & McGhee, 850).

Again, Stephanopoulos and McGhee: “‘Inefficient’ [wasted] votes are those that do not directly contribute to victory. Thus, any vote for a losing candidate is wasted by definition, but so too is any vote beyond the 50 percent threshold needed (in a two-candidate race) to win a seat,” (Stephanopoulos & McGhee, 850–51). This reflects the two primary tactics of gerrymandering: “packing” and “cracking.” Packing concentrates opposition-party voters into as few districts as possible, where opposition candidates win easily by enormous margins. Packing large populations of opposition voters dilutes their ability to elect candidates of their choice in the rest of the map, and the remaining opposition can then be “cracked” across the remaining districts, further diluting opposition voters’ power. For example, in the current Maryland congressional map, Republican voters are packed into the 1st District, which Republicans consistently win by a 20-point or higher margin. In the rest of the state, Republican voting power is cracked: in 2018, Republicans averaged only about 27.9% of the vote in the other seven districts, adjusting for voter turnout. The goal of a gerrymander, as Maryland’s current Democratic gerrymander demonstrates, is to waste fewer votes than the other party by packing and cracking the opposition vote into oblivion.

The efficiency gap first calculates the wasted votes for each party in every district, then translates that data into a simple, elegant number. The wasted votes for each party are totaled statewide and compared. Stephanopoulos and McGhee defined the efficiency gap as “the difference between the parties’ respective wasted votes, divided by the total number of votes cast in an election,” (Stephanopoulos & McGhee, 851). The number produced is a percentage, which, in the case of congressional elections, can then be translated into a partisan advantage measured in congressional seats.
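Put concretely, the wasted-vote bookkeeping above can be sketched in a few lines of Python. This is my own minimal illustration of the published definition, not code from the paper, and it assumes a two-candidate race (with no exact ties) in every district:

```python
def wasted_votes(votes_a, votes_b):
    """Wasted votes for each party in a two-candidate district.
    The loser wastes every vote; the winner wastes only the votes
    beyond the 50% threshold needed to win."""
    threshold = (votes_a + votes_b) / 2
    if votes_a > votes_b:
        return votes_a - threshold, votes_b
    return votes_a, votes_b - threshold

def efficiency_gap(districts):
    """Statewide efficiency gap: the difference between the parties'
    total wasted votes, divided by the total votes cast. districts is
    a list of (votes_a, votes_b) pairs; a positive result favors A."""
    wasted_a = wasted_b = total = 0.0
    for a, b in districts:
        wa, wb = wasted_votes(a, b)
        wasted_a, wasted_b, total = wasted_a + wa, wasted_b + wb, total + a + b
    return (wasted_b - wasted_a) / total
```

Multiplying the resulting percentage by the number of districts converts it into the seat-advantage figures used throughout this piece.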

Put into practice, the efficiency gap reveals the vast scope and depth of partisan gerrymandering across the country. In Extreme Maps, a 2016 Brennan Center for Justice report by Michael Li and Laura Royden on the effect and extent of gerrymandering from 2012 to 2016, an analysis using the efficiency gap found that “three states had a gap of at least two seats — the standard for presumptive unconstitutionality proposed by Stephanopoulos and McGhee — in every election since 2012: Michigan, North Carolina, and Pennsylvania. Republicans had sole control of the map-drawing processes in all three states, and all of the seat gaps favor Republicans,” (Michael Li & Laura Royden, Extreme Maps, 6). The Brennan Center study, however, calculated efficiency gaps only for states with six or more districts, as Stephanopoulos and McGhee recommend (Li & Royden, 17).

In my own study, I calculated the efficiency gap for all states (including states with fewer than six districts), going back to 2012 and including the 2018 midterm elections. I found that there are currently six Republican gerrymanders — Arkansas, Georgia, Michigan, North Carolina, Ohio, and Wisconsin — and two Democratic gerrymanders — Connecticut and Massachusetts. Arkansas and Connecticut were not included in the Brennan Center report because they both have fewer than six congressional districts — Arkansas has four and Connecticut five (Li & Royden, 22). Seven other states — Alabama, Indiana, Kansas, Missouri, New York, Pennsylvania, and Texas — had an efficiency gap of greater than two congressional seats in favor of Republicans at some point in the last four election cycles but were disqualified from being deemed gerrymanders because of the results of sensitivity tests.

Sensitivity testing, as described by Stephanopoulos and McGhee, is designed to measure the strength of gerrymanders in the face of large vote shifts between parties. Because a plan’s efficiency gap can change dramatically from election to election depending on the results, it is important to evaluate a map both over time and under large vote shifts. Thus, Stephanopoulos and McGhee argue that a map should be invalidated only if its efficiency gap exceeds the two-seat threshold at some point in its lifetime and the map never favors the opposition party when the vote shifts by up to 7.5% in either direction (Stephanopoulos & McGhee, 889). I’ll discuss the effect of sensitivity testing later.
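A rough sketch of how such a test might work is below. The uniform statewide shift is itself a simplifying assumption (it is essentially uniform partisan swing), as is equal turnout across districts, and the function names are mine, not the paper's:

```python
def eg_from_shares(shares_a):
    """Efficiency gap from party A's vote share in each district,
    assuming equal turnout everywhere; a positive value favors A."""
    wasted_a = sum(s - 0.5 if s > 0.5 else s for s in shares_a)
    wasted_b = sum(1 - s if s > 0.5 else 0.5 - s for s in shares_a)
    return (wasted_b - wasted_a) / len(shares_a)

def survives_sensitivity_test(shares_a, max_shift=0.075, step=0.005):
    """Shift the statewide vote uniformly by up to max_shift in either
    direction; the map fails the test (returns False) if the efficiency
    gap ever flips to favor the other party."""
    base = eg_from_shares(shares_a)
    steps = round(max_shift / step)
    for k in range(-steps, steps + 1):
        # clamp shifted shares to the [0, 1] range
        shifted = [min(max(s + k * step, 0.0), 1.0) for s in shares_a]
        if eg_from_shares(shifted) * base < 0:
            return False
    return True
```

A map that wins its districts by thin margins (say, 55%) flips under a modest swing and fails, while one built on more comfortable margins survives the full 7.5% shift.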

The appeal of the efficiency gap is, above all, its simplicity: it captures in a single, tidy number all the “packing” and “cracking” that goes into a partisan gerrymander. On a fundamental level, it expresses the incredible harm that gerrymandering produces. Again, here’s Stephanopoulos and McGhee: “After voters have decided which party they support — based on whatever criteria they choose, including the attractiveness of each party’s policy agenda — the votes cast by supporters of the gerrymandering party translate more effectively into representation and policy than do those cast by the opposing party’s supporters. The gerrymandering party enjoys a political advantage not because of its greater popularity, but rather because of the configuration of district lines. The parties do not compete on a level playing field,” (Stephanopoulos & McGhee, 852–53).

Mathematically, the efficiency gap does have some benefits, and the measure generally fulfills all four of Kennedy’s criteria established in LULAC. First, the efficiency gap does not utilize the assumption of uniform partisan swing; in fact, it relies almost completely on actual election results, and translates real-world data into a gerrymandering calculation, thus fulfilling Kennedy’s second requirement. Partisan bias, in contrast, requires the formation of a hypothetical scenario in which the parties split the statewide vote equally. Occasionally, this hypothetical vote shift can produce a counterintuitive result where seats are hypothetically given to the real-world losing party — this phenomenon is called the “counterfactual window,” (Stephanopoulos & McGhee, 861). It is manifestly impossible for the efficiency gap to produce the “counterfactual window.”

The efficiency gap succeeds in Kennedy’s third criterion because the two-seat threshold of unconstitutionality for congressional redistricting is not arbitrary; on the contrary, the threshold was determined using historical election data from the 1960s through the 2010s. From Stephanopoulos and McGhee: “A gap of two or more seats placed a plan in the worst 14 percent of all plans in this era, roughly 1.5 standard deviations from the mean… A two-seat gap therefore indicates that a district plan is gerrymandered to an unusual extent and that the gerrymandering has an unusually large impact on the makeup of the House as a whole,” (Stephanopoulos & McGhee, 888).

Kennedy’s fourth criterion has more to do with the legal strategy of anti-gerrymandering plaintiffs (their use of the efficiency gap in conjunction with other gerrymandering measures) than with the efficiency gap itself, but Stephanopoulos and McGhee recognize that other strategies should also be used to build a strong case: “Of course, a mere assertion that a large efficiency gap followed inexorably from the application of a legitimate state policy would fail to rebut the presumption of unconstitutionality. A state would have to present concrete proof that its objectives could not have been realized to the same extent had it devised a plan with a smaller gap,” (Stephanopoulos & McGhee, 893).

The efficiency gap also excels in its historic assessment of gerrymandering over the past 50 years. In states with eight or more congressional districts, the net efficiency gap from 1972 to 2012 is remarkably close to zero (Stephanopoulos & McGhee, 869–70). This functions as a solid foundation that modern gerrymanders can be measured against; it also lends further credibility to Stephanopoulos and McGhee’s two-congressional-seat threshold.

Additionally, the efficiency gap does not use proportionality as the baseline against which a gerrymander is measured. This is a benefit in court more than anything, as the Supreme Court has shown reluctance, if not outright opposition, to striking down maps where the plaintiff so much as mentions proportionality. The Supreme Court stated in the Davis v. Bandemer plurality opinion that “the mere lack of proportional representation will not be sufficient to prove unconstitutional discrimination.” The efficiency gap’s implied ideal relationship between seat share and vote share is not proportional. From Stephanopoulos and McGhee: “Each additional percentage point of vote share for a party should result in an extra two percentage points of seat share,” (Stephanopoulos & McGhee, 854). Stephanopoulos and McGhee explain this disparity as a “winner’s bonus.”
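The "winner's bonus" can be made precise: with equal turnout in every district, the efficiency gap reduces algebraically to (seat share - 1/2) - 2 * (vote share - 1/2), so a zero gap implies exactly two extra points of seat share per extra point of vote share. A quick numerical check of that identity (my own sketch, under the equal-turnout assumption):

```python
def eg_wasted_votes(shares_a):
    """Efficiency gap computed district by district from wasted votes
    (equal turnout in every district); positive favors party A."""
    wasted_a = sum(s - 0.5 if s > 0.5 else s for s in shares_a)
    wasted_b = sum(1 - s if s > 0.5 else 0.5 - s for s in shares_a)
    return (wasted_b - wasted_a) / len(shares_a)

def eg_seats_votes(shares_a):
    """Equivalent closed form: (seat share - 1/2) - 2*(vote share - 1/2)."""
    n = len(shares_a)
    seat_share = sum(1 for s in shares_a if s > 0.5) / n
    vote_share = sum(shares_a) / n
    return (seat_share - 0.5) - 2 * (vote_share - 0.5)
```

The two functions agree on any set of district vote shares, which is why a zero-gap plan awards seats at twice the rate of votes rather than proportionally.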

Finally, the cross-election sensitivity testing, in addition to further narrowing the field of candidate gerrymanders, can be used as a measure in itself. The ability of a plan to entrench incumbents despite the will of the voters is directly measured by sensitivity testing.

However, though the efficiency gap checks most of the right boxes, it has many problems that, though not entirely disqualifying the use of the efficiency gap, recommend the measure play a secondary role in gerrymandering jurisprudence. Its problems are, in fact, so great that if adopted as the national gerrymandering standard, it would strike down some maps that are fair and maintain some maps that are not.

Kennedy’s second and third requirements are violated by the efficiency gap when you take into account uncontested races and sensitivity testing. When using the efficiency gap, uncontested elections — in which only one of the two major parties fields a candidate — can throw off the statewide calculation because they give a distorted picture of the party vote split in a district. For example, in a district that would split 60%–40% between Party A and Party B if both fielded candidates, if only Party A fields a candidate, it appears that 100% of the vote goes to Party A. This can greatly distort the efficiency gap statewide.

In order to adjust for uncontested elections, political scientists input hypothetical election results in uncontested districts: “We strongly discourage analysts from either dropping uncontested races from the computation or treating them as if they produced unanimous support for a party,” (Stephanopoulos & McGhee, 867). This can be done in a variety of ways: in my analysis I used past House and presidential election data, as well as the Cook Political Report’s Partisan Voter Index, which measures district-level partisan lean. These imputed results are, put simply, hypothetical elections. Benjamin Plener Cover wrote in Quantifying Partisan Gerrymandering, a study published in the Stanford Law Review: “The efficiency gap may be particularly appealing — especially to Justice Kennedy — because it relies upon directly observed election data rather than hypothetical results. But if calculating the gap requires imputing hypothetical results, and if the size of the gap depends in substantial part on which method an analyst selects, the gap is less of a straightforward measure of real-world data,” (Benjamin Plener Cover, Quantifying Partisan Gerrymandering, 1188).
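In practice the imputation looks roughly like the following sketch; the field names and the presidential-vote proxy here are illustrative stand-ins, not a real dataset or the exact method of any particular study:

```python
def impute_shares(districts):
    """For each district, use the observed House vote share for party A,
    or fall back to a proxy (e.g. the district's presidential vote share)
    when the House race was uncontested. Every fallback is, by
    construction, a hypothetical election result."""
    return [
        d["house_share_a"] if d["house_share_a"] is not None else d["proxy_share_a"]
        for d in districts
    ]
```

The choice of proxy (past House results, presidential results, a partisan-lean index) is exactly the analyst discretion Cover warns about: different fallbacks yield different gaps.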

Additionally, the critical sensitivity testing also violates Kennedy’s second and third criteria because statewide vote shares must be shifted to gauge the strength of gerrymanders. The result of the shift is a kind of hypothetical election.

The efficiency gap is also somewhat problematic in solving Kennedy’s third requirement (that of establishing a workable threshold of unconstitutionality) because the two-congressional-seat threshold discounts gerrymanders in states with fewer than eight seats. Twenty-nine states (a majority) have fewer than eight congressional seats, comprising a total of 98 seats in the House, and writing them off as impossible to gerrymander is both factually wrong and democratically harmful. Wendy Tam Cho wrote in Measuring Partisan Fairness, an essay published in the University of Pennsylvania Law Review: “In their analysis, Stephanopoulos and McGhee limit their study to states with at least eight congressional districts… [T]his reduces the volatility that arises with smaller state delegations. A general measure of partisan fairness should, however, work for any size delegation […] If the efficiency gap calculation is not viable for any size delegation, this is indicative of underlying measurement issues,” (Wendy K. Tam Cho, Measuring Partisan Fairness, 20 note 10).

One of the major problems with the efficiency gap, outside of Justice Kennedy’s requirements established in LULAC, is that it discounts individual district results — and the competitiveness of those races — in favor of a statewide measure. The efficiency gap does not account for the competitiveness of district-level elections and can register close races as extremely biased in favor of a party: “EG behaves very erratically if there are districts with competitive races, because a genuinely close outcome will produce lopsided vote wastage, but it is unpredictable which side this falls on,” (Mira Bernstein & Moon Duchin, A Formula Goes to Court, 1022). For example, if Party A receives 52 votes and Party B receives 48 votes in a district (a close election by any standard), the efficiency gap calculates that Party A wastes 2 votes and Party B wastes all 48 of its votes. This produces an efficiency gap in the district of 0.46 congressional seats in favor of Party A, despite the election being within 4%.
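The arithmetic of that example can be checked directly (a sketch assuming 100 total votes and a 50-vote threshold):

```python
# Party A: 52 votes, Party B: 48 votes, in a single 100-vote district.
votes_a, votes_b = 52, 48
threshold = (votes_a + votes_b) / 2   # 50 votes needed to win (two-candidate race)
wasted_a = votes_a - threshold        # the winner wastes votes past the threshold: 2
wasted_b = votes_b                    # the loser wastes every vote: 48
district_gap = (wasted_b - wasted_a) / (votes_a + votes_b)  # (48 - 2) / 100 = 0.46
```

A 4-point race thus registers a 46% gap in a single district, which is the erratic behavior Bernstein and Duchin describe: flip a handful of votes and the entire 0.46 swings to the other party.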

In wave election years, in which the vast majority of competitive elections are won by one party, the statewide efficiency gap can be heavily skewed. In 2018, for example, New Jersey dramatically shifted toward Democrats in House elections, flipping four Republican seats. All three competitive seats in the state elected Democrats (I define competitive races as elections in which the winning candidate’s margin was 10% or less). Because all competitive races went blue, however, the efficiency gap produced a 3.81 congressional seat advantage for Democrats statewide. In non-wave elections, New Jersey generally splits its competitive elections equally between Democrats and Republicans; this is reflected in its efficiency gap across 2016, 2014, and 2012, which averaged 1.22 congressional seats in favor of Republicans. Competitiveness is a critical measure in understanding the responsiveness of a map — the efficiency gap generally fails in this understanding.

Compounding the efficiency gap’s inability to register competitiveness is the uncommon but possible scenario of the “bipartisan gerrymander.” The bipartisan gerrymander, as described by Tam Cho, is “where the two parties, majority and minority, join forces to create a sweetheart deal where both parties are protected in safe seats, thereby preserving the status quo via non-competitive elections. Bipartisan gerrymanders, while usually not biasing one party over the other, lack responsiveness to the electorate,” (Tam Cho, 33). The efficiency gap ultimately does not measure responsiveness, only the net wasted votes across a state; if both parties work together to waste many more votes statewide, and there are approximately the same number of wasted votes for both parties, the efficiency gap will register the map as fair, despite individual voter power being essentially diluted into nonexistence.

Stephanopoulos and McGhee’s response to these problems is sensitivity testing, but, as mentioned earlier, this requires the input of hypothetical election results. Thus, the efficiency gap does not circumvent the problems that plagued partisan symmetry methods when brought before courts.

The efficiency gap can also produce counterintuitive results, especially in districts in which one party wins by a landslide. If the vote is split 75%–25% between parties, the efficiency gap of the district will be zero, even though the district may have been heavily gerrymandered in favor of the winning party. Additionally, if a party wins with more than 75% of the vote in a district, the efficiency gap will be in favor of the losing party (Bernstein & Duchin, 1022). Though Stephanopoulos and McGhee recognize this problem, they argue that it is so rare that “this is not a problem that is especially relevant to real-world redistricting,” (Stephanopoulos & McGhee, 864). That is not what I find: in the 2018 midterm elections in New York, for example, there were six congressional districts where Democrats won with greater than 75% of the vote. Additionally, there were six uncontested races won by Democrats, and though Democrats flipped three competitive seats, the efficiency gap produces a 3.80 congressional seat bias in favor of Republicans statewide — this is primarily a result of the counterintuitive efficiency gap in districts won by Democrats with greater than 75% of the vote.
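The flip is easy to see numerically; this sketch of mine uses equal-turnout vote shares, with positive values favoring the winning Party A:

```python
def district_gap(share_a):
    """Single-district efficiency gap from party A's vote share
    (equal turnout); a positive value favors A."""
    if share_a > 0.5:
        wasted_a = share_a - 0.5   # winner's surplus votes
        wasted_b = 1 - share_a     # loser wastes everything
    else:
        wasted_a = share_a
        wasted_b = 0.5 - share_a
    return wasted_b - wasted_a

comfortable = district_gap(0.60)  # about +0.30: reads as bias toward A
landslide = district_gap(0.75)    # exactly 0: a 75-25 blowout registers as "fair"
extreme = district_gap(0.80)      # about -0.10: past 75%, the gap favors the loser
```

At exactly 75% the winner's surplus (25 points) equals the loser's entire share (25 points), so the wasted votes cancel; beyond that, every additional winning vote counts against the winner.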

The sensitivity testing prescribed by Stephanopoulos and McGhee can also be problematic. The procedure for conducting it is vague; the Stephanopoulos and McGhee paper reads: “We suggest shifting the actual election results by percentages derived from historical data — up to 7.5 percent in each direction for congressional plans,” (Stephanopoulos & McGhee, 889). This indefinite language can lead different efficiency gap analyses to different results. Moreover, shifting statewide vote shares by up to 7.5% can lead to unrealistic results; besides, finding a reasonable vote shift for a state leads further and further down the road of hypothetical elections, which the efficiency gap is supposed to avoid altogether. Ultimately, if sensitivity testing is so critical to calculating the efficiency gap, one could simply create a measure of gerrymandering that is just the sensitivity-testing component of the efficiency gap, with some adjustments; in fact, this new measure would be almost mathematically equivalent to partisan bias.

Another problem with the efficiency gap is its rejection of proportionality. The efficiency gap’s disproportionality may be a benefit in the court system, which has continued to reject proportionality as a gerrymandering standard, but I see little reason for a standard to be disproportionate. In actuality, any gerrymandering measure must be measured against something, and the efficiency gap, while not measured against proportionality, is measured against an implied ideal seat distribution. The efficiency gap, as Stephanopoulos and McGhee wrote, “is a measure of undeserved seat share: the proportion of seats a party receives that it would not have received under a plan with equal wasted votes,” (Stephanopoulos & McGhee, 854). Adjusting for these undeserved seats produces an odd and counterintuitive seat distribution: in 2018, the efficiency gap’s implied ideal distribution was 268 Democratic and 167 Republican representatives. Some states’ distributions look somewhat reasonable, such as Colorado’s 4 Democrats and 3 Republicans, while others do not, like New York’s 25 Democrats and 2 Republicans.

In addition to the efficiency gap’s implied ideal seat distribution, the efficiency gap actually penalizes proportionality, instead relying on double-proportionality. According to Stephanopoulos and McGhee, again, “Each additional percentage point of vote share for a party should result in an extra two percentage points of seat share,” (Stephanopoulos & McGhee, 854). Not only is this counterintuitive and unjustified, it is anti-democratic; there is no reason seats should not be allocated proportional to votes across a state (Bernstein & Duchin, 1022). Indeed, politicians are far more responsive to the preferences of their constituents if individual voter power is increased rather than diluted; this is only possible if each vote, regardless of political geography or any number of other factors, has the power to remove or maintain politicians, which, in turn, is only enabled by some degree of proportionality. Double-proportionality, as utilized in the efficiency gap, is meaningless and arbitrary — though, I admit, it may have its advantages in court (Bernstein & Duchin, 1023).

There are a number of other problems with the efficiency gap — such as its inability to take into account political geography, its nongranularity, and its conflation of wasted votes cast for winning and losing parties — but these seem far more trivial than the issue of proportionality.

Finally, though the efficiency gap claims to be simple and comprehensible — a reduction of the complexities of gerrymandering to, as Stephanopoulos and McGhee claim, a “single tidy number” — it is far from that (Stephanopoulos & McGhee, 831). It both oversimplifies gerrymandering and is far more complex than it initially seems to be. Mathematician Moon Duchin wrote, “Gerrymandering is a fundamentally multidimensional problem, so it is manifestly impossible to convert that into a single number without a loss of information that is bound to produce many false positives or false negatives for gerrymandering.”

At the same time, with the efficiency gap requiring sensitivity testing, hypothetical elections to replace uncontested races, and the compiling of historic election data to determine the threshold of unconstitutionality and the degree of hypothetical vote shifts for sensitivity testing, the calculation of the efficiency gap is anything but simple. Additionally, it fails to be comprehensive, omitting election results in 98 seats in states with fewer than eight districts.

Part III: The Future of the Efficiency Gap

Despite its many failures, the efficiency gap remains one of the most judicially workable standards of gerrymandering out there today. It passes, to some extent, Kennedy’s requirements in LULAC and, for a time, was heralded by political scientists as one of the best hopes to fight partisan gerrymandering. For a time.

When the efficiency gap was first tested in court, in Whitford v. Gill, it failed… sort of. The question of the constitutionality of partisan gerrymandering went unanswered: the Supreme Court sent the case back for lack of standing, as the lead plaintiff, William Whitford, a Democrat, did not live in a district that had been heavily gerrymandered against Democrats and could not prove that he had suffered an “injury in fact.” This technicality allowed the court to kick the can down the road for another year.

Time and again, the Supreme Court has refused the opportunity to definitively put its weight against partisan gerrymandering. The court did not completely condemn the efficiency gap as a measure, but it did state a preference for district-level, rather than state-level, measures, as it is easier to gauge the violation of individual voters’ constitutional rights at the district level (Mark Ruch, The Efficiency Gap After Gill v. Whitford, 57). However, the court did leave the door open for the efficiency gap to be used in conjunction with other gerrymandering measures in future cases.

The reluctance of the Supreme Court to rule against partisan gerrymandering could not have come at a more inopportune time: redistricting technology is only becoming more powerful, and diluting voter power is no longer something only master cartographers can accomplish — on the contrary, anyone can draw a map from their living room in a matter of hours.

In the 2004 Vieth ruling, the Supreme Court called gerrymandering an “unanswerable question.” Though the efficiency gap has its (many) flaws, it is something of a workable standard — admittedly one that needs a lot of work. And there are other, more effective methods of measuring gerrymandering available — which I will discuss further in future episodes of The Consent of the Governed. But the court has declined to act on partisan gerrymandering, apparently because finding a functional standard to measure it is too much work (Chief Justice John Roberts described the efficiency gap as “sociological gobbledygook” in Gill v. Whitford, and Justice Neil Gorsuch compared it to his favorite steak rub). Yet partisan gerrymandering remains a serious threat to American democracy, despite the court, and even if the court dismisses another hundred methods of measuring gerrymandering, voting power will still be diluted unjustly.

In Davis v. Bandemer, the Supreme Court ruled that partisan gerrymandering is unconstitutional if extreme enough. Since that ruling in 1986, anti-gerrymandering advocates have been trying to prove that it is extreme enough, without much success. My response is, in the words of Wendy Tam Cho, “If you’re never going to declare a partisan gerrymander, what is it that’s unconstitutional?”


I’m Carter Hanson, a student at Gettysburg College from Boulder, CO studying political science. I love to write in-depth editorials on politics and the world.