Are We Nominating Candidates the Right Way?
To identify and put forth candidates with broader appeal, we must boost political parties’ involvement in candidate selection
A few years back, New York Rep. Joe Crowley’s stock was on the rise. He was an established and experienced officeholder, having served 10 terms in the House—and some even speculated that Crowley might one day be the leader of the House Democrats, maybe even Speaker. Yet his political career was ended by the actions of one young party activist: In 2018, he was defeated in an upset by Alexandria Ocasio-Cortez, who instantly became a progressive star, at least in the media and among other party activists.
Whatever one thinks of AOC, there’s no doubt that she is one of the most progressive members of the Democratic caucus—far more in line with Democrats’ left wing than with many of the party’s rank and file. This political world—one dominated by more ideologically extreme officeholders and primary voters—has existed for quite some time. For decades, political nominees have been selected by their parties through primaries, and today, primaries are central to how parties manage candidate selection. Only recently, however, have the full ramifications of primaries become clear, and in some ways, primary elections are even more significant than general elections, since they are where wannabes become true political contenders.
But primaries have led to problems their creators did not foresee: Greater popular control of the process has weakened the fundamental role parties serve and resulted in a nomination process controlled by ideologically extreme players or famous party outsiders who engage in what might be characterized as a hostile takeover of the party. To ameliorate this situation, we should restore peer review to the candidate selection process—whereby party leaders and officeholders play a stronger part in determining who might be suitable to govern and who might appeal beyond the party base. Such candidates would also be more likely to work well as part of the party “team.”
In short, after decades of injecting more democracy in the way we choose candidates, now may be the time for parties to have a greater role in the primary process.
The Strangeness of Parties
Political parties are strange institutions. They are often maligned in the United States for a variety of reasons—critics call them corrupt, ideologically rigid or not ideologically pure enough. Yet political scientists often seem to love them: As famous political scientist V.O. Key once wrote, “No parties, no democracy.” Across nations, history seems to teach that democratic politics need parties: Parties serve as a vital link between party elites and voters, and across institutions (for example, parties enable like-minded elected officials in different offices to work as a team).
Voters often find the political world confusing. There are many offices, and policies are complicated; it takes a lot to understand how politics all fits together. But parties provide an easily understood marker for clearly delineated teams. They allow a voter to look at a ballot and immediately know who is on their “team”—who they are likely to agree with more. Even for people who don’t strongly identify with a party, parties can still provide a marker: They enable even voters with limited knowledge to decide whether to vote for or against someone who’s currently in power.
Parties help connect voters to candidates and help citizens understand what is going on in the political world. But parties are also responsible for choosing good candidates and, ultimately, good governing officials. And that means that parties can’t give in to the demands of complete citizen control—because such control leads to a distorted primary electorate, overly influenced by activists and ideologues. As a result, fewer nominees are the more moderate, problem-solving candidates many Americans desire.
Primaries: A Product of the Progressive Age
The primary process we’ve come to accept would have been quite surprising not only to politicians of the 19th century, but also to politicians through most of the 20th century. Primaries initially developed in large part because of progressive reforms in the early 1900s. The rise of progressivism brought with it major changes in the ways Americans viewed government and politics.
First, the progressives were inveterate reformers—they believed that for things to progress, problems needed to be solved, and the old ways of trying to solve those problems were at the very least suspect. Administration, in their view, should be handed over to people with proper training and the skills to manage government. The creation of a trained civil service was probably their most lasting contribution—one that continues to expand today.
Second, progressives had enormous faith in democracy. They valued citizen input, while also preferring expert administration of a newly efficient and complicated government. Progressive reformers saw inefficiency and corruption in our politics as an infection that had to be rooted out. And with regard to selecting candidates, progressives believed that the old ways were highly compromised. Party caucuses and conventions, dominated by party bosses, maintained a corrupt and inefficient government, and that had to end.
Democratic presidential candidate Al Smith famously captured what progressives thought about the kind of change that was needed: “All the ills of democracy can be cured by more democracy.” Power should be taken from the party bosses, and instead, America should let the people decide. Primaries seemed like a natural solution.
It still took quite a long time for primaries to fully take hold nationally. For most of the 20th century, candidates were chosen at national party conventions, and most states continued to use some form of caucus or state convention to select their state delegations. These delegations were still dominated by so-called party bosses, though some states employed primaries to select delegates or parts of delegations.
Other states had “beauty contests” in which voters could provide a kind of nonbinding preference: State bosses largely controlled how delegates were chosen, though there would be an election that served as a kind of poll about which candidates were most popular with party voters. All in all, in most states, the average voter had little to no control over the real selection of delegates to the convention.
Parties remained highly decentralized, with states given a great deal of discretion in determining how delegates were selected. But in the second half of the 20th century, that would start to change. In 1960, for example, primaries were seen as a way to win some delegates and as a way for outsiders to raise their profiles and prove themselves. Lyndon Johnson, a powerful insider, chose to forgo the Democratic primaries altogether, planning instead to win the nomination at the convention itself.
Meanwhile, party leaders were wary of accepting John F. Kennedy because he was young and because he was a Roman Catholic. Kennedy’s strategy, therefore, had to be to win primaries and thereby impress party leaders; that, in turn, would lead to winning the most delegates, and thus the nomination.
After narrowly winning the primary in Catholic-heavy Wisconsin over Hubert Humphrey, Kennedy needed to prove himself in West Virginia, a highly Protestant state, and achieve a clear victory. This he did—and party leaders around the country accepted this as a sign that Kennedy could have broad appeal. JFK won on the first ballot at the Democratic National Convention that year, and Johnson’s plan, and hope, for a divided convention where he could overtake Kennedy in later ballots was thwarted.
Too Much Democracy?
Starting in 1972, the world of presidential nominations truly changed in favor of greater democracy. The Democratic National Convention of 1968 was highly contentious—if not calamitous—for the party. With President Johnson not seeking re-election, the Democratic Party essentially engaged in open warfare between those supporting Johnson and the Vietnam War and those seeking a quick end to U.S. involvement in Southeast Asia. The assassination of Robert Kennedy only added to the confusion and anger.
Chicago, home of the convention, was a scene of chaos. Outside, police clashed with antiwar protesters, while inside, convention delegates clashed with one another. The eventual nominee—Vice President Hubert Humphrey—received the nomination at 2 in the morning, while the acrid smell of tear gas filled the streets outside. In the wake of this debacle, the McGovern-Fraser Commission was created to investigate how the nominating process could be changed.
As political scientist Byron Shafer explained in his book “Quiet Revolution,” what the commission seemed to envision were caucuses where people came together to discuss candidates and engage in a great deal of political dialogue. At times, their ideas seemed to evoke an idealized Athenian democracy. But such a system seems designed for political scientists—people who love talking about politics all the time—not average people with typical daily demands on their time. So instead of embracing the caucus approach, both parties in most states adopted primaries.
But at least in the eyes of party officials, this swung the pendulum too far in favor of citizen input: Fairly quickly, both parties began to search for ways to bring party elites back into the process and restore the peer review element that had been lost. This peer review did a number of valuable things. First, it meant that people who knew the candidates—as peers—could weed out candidates who were simply not acceptable to the party. This also meant that the future candidate would probably be someone who could work with others in the government. Furthermore, the party elites were less driven by ideology and more driven by the desire to appeal to the greatest possible number of voters to win the general election.
The Democrats were the first to try to move back toward greater peer review. After decisive defeats in 1972 and 1980, many party leaders felt that the nomination process was flawed. The Democrats started in 1984 with the creation of superdelegates: party officials and elected officeholders, not pledged to any particular candidate, who were given an automatic spot at the convention.
The idea was to bring elected officeholders and party officials firmly back into the process. They could pledge their support to candidates and signal to voters which of the people running they believed was the best candidate to win the general election. They were meant to provide some of the old peer review that the party had before primaries came to dominate the nomination process.
But these superdelegates simply didn’t function that way. The 2008 Democratic nomination fight between Barack Obama and Hillary Clinton shows what happened in practice. Many superdelegates did flock to Clinton—a party insider—early, but many stayed on the sidelines waiting to see how the primaries unfolded. As Obama won primaries and took a lead in pledged delegates, more superdelegates moved to support him. Several of Clinton’s superdelegates—even some who publicly said they would support her—defected to Obama. So much for peer review: Indeed, those peers followed the lead of the public rather than the other way around.
So it’s clear that in the name of democracy, parties have abdicated the peer review function that conventions often served—and superdelegates have not filled the gap. Perhaps more importantly, today’s primaries create candidates who feel that they’ve won the nomination all on their own. They feel no great loyalty to the party that eventually nominates them. The candidate, and then the president, is largely untethered from the party. In many ways, Donald Trump is a perfect example of what this system produces.
A ‘Conventional’ Solution
Political parties serve two main functions in democracies. They should reflect the will of party members, channeling that will into a nomination that is largely acceptable to them. But parties also must look beyond the will of their members and find ways to appeal to independent voters and even some members of the opposing party if they hope to win general elections.
This is where peer review comes into play. The president must understand that he or she is part of a wider government and needs to work effectively with other members of the party as a team—and that he or she is president of the entire nation, no matter which party his or her constituents belong to. While I don’t believe we can or should go back to the old days when conventions chose the nominee with little input from voters, we need to bring the twin goals of parties into a more reasonable balance.
One possibility is to revive the convention as a deciding body, but instead of it coming at the end of the primary process, it should come at the beginning. Before voters get involved, declared candidates could present themselves to a party convention of party officials and elected party members. There, in front of an audience of several thousand delegates, candidates could make speeches, meet with groups and debate one another. Then the delegates could vote, and the top handful of candidates—perhaps three or four—would proceed to the primaries.
Such a process would restore some power to the parties in selecting candidates, but not definitive power. In effect, this system could create a power-sharing arrangement between party elites and the mass of party voters in which both would have a strong say in who the nominee would be.
Going into 2024, we are witnessing the weakness of the political parties—at least the Republican Party. The Democrats are in a somewhat different situation, given that incumbent presidents nearly always win renomination if they want it. (In fact, we’d have to go back to 1884 and Chester Arthur to find an incumbent denied renomination by his party.) But the Republicans in 2024 are in a quandary: They seem poised to nominate a four-time-indicted former president who seems intent on running a campaign centered around 2020.
One problem is that there are currently numerous Trump challengers dividing the anti-Trump vote, nearly all of whom seem unable to offer a striking contrast with Trump (former Gov. Chris Christie is the lone exception). If there were an early convention, it might be possible for a single anti-Trump Republican like Christie to emerge as a viable alternative—one who could then consolidate the anti-Trump vote. Such a candidate might also be emboldened, by a convention endorsement, to draw a sharp distinction between himself or herself and Trump.
Of course, nothing is guaranteed: Politics does not have laws the way physics does. In reality, the best we can hope for is to foster certain tendencies. But suffice it to say, the Republicans might very well nominate a flawed candidate, and do so virtually without a fight, thanks to the utter lack of peer review in our political process.
These days, the parties are prone to force ideological extremes upon a frustrated electorate that doesn’t see its views represented—no wonder we often hear voting described as choosing the lesser of two evils. Whatever the strategy, we must set our sights on restoring the proper balance between democracy and expertise that lies at the heart of all large representative democracies. To think otherwise is to ignore the great historic example that parties have set. Greater party involvement in candidate selection would get us much closer to candidates with broader appeal—the kinds of candidates who may excite more voters to actually get out to the polls.