Well, you’re the expert: how signals of source expertise help mitigate partisan bias

ABSTRACT Through low-information rationality, citizens can compensate for their lack of political knowledge by turning to experts to help interpret and economize information. However, citizens must navigate a political media environment that is oversaturated with unqualified sources and competing cues, leading some scholars to question whether individuals are willing or able to use low-information rationality effectively. Much prior work focuses on partisan motivated reasoning, asserting that the influence of partisanship overwhelms that of other relevant informational cues. A relatively smaller subset of works challenges this claim, finding that the influence of partisanship is often diminished by contextual cues. I address this debate with two experimental designs that place source cues in a competing context by simultaneously manipulating expertise-related cues and partisan cues. I find that individuals do take source expertise and credibility into account, even when confronted with competing partisan source cues, helping to somewhat mitigate partisan biases.


Introduction
When gathering political information, citizens are often confronted with a media environment that is oversaturated with competing messages, sources, and cues. A Washington Post analysis conducted in 2016 found that 601 political pundits made an appearance on the three main cable news networks (CNN, Fox, and MSNBC) over an eight-day period, with as many as 11 pundits on screen at once (Farhi 2016). Though perhaps counterintuitive, media outlets hold debates on highly technical issues featuring perspectives from highly unqualified, non-expert sources with disturbing regularity. While there may be a minimum level of assumed source expertise when a pundit appears in the news, the question remains as to whether citizens can distinguish the experts from the non-experts when there are many competing voices. Even if citizens think "that pundit must be credible if they made it on the news," this does not address whether and how citizens choose to distinguish between competing sources.
Perhaps the most famous (and most parodied) example is entertainer Bill Nye's repeated debates over climate change on CNN and Fox News with other non-expert pundits (e.g. Tucker Carlson of Fox News, Nick Loris of The Heritage Foundation), despite Nye himself having little scientific experience or training (500 Women Scientists 2018). In another example, CNN has featured a plethora of opinions on childhood vaccination laws from qualified and unqualified sources, including medical doctors (e.g. Gupta 2017; Vox 2017), journalists, and celebrity actors (e.g. McCarthy and Carrey 2008). This problem has become increasingly persistent in the internet age, as non-expert pundits and elected officials receive regular airtime and column inches to spread highly partisan-influenced perspectives (e.g. President Trump and Secretary of State Pompeo's repetition of Obama birther and Clinton Benghazi conspiracy theories).
This creates a confusing atmosphere in which citizens may struggle to weigh multiple competing cues from expert and non-expert sources. While citizens can make semi-informed decisions through heuristic cues and low information rationality (Downs 1957;Popkin 1994;Lupia 2015), competing cues from multiple sources often disrupt this process, leading to worse decision-making (Boudreau 2013). With many competing cues, how do individuals distinguish expert opinions from lower quality perspectives?
One potential answer is expertise source cues, from which individuals can infer the qualifications of the speaker (Giffin 1967; Boudreau and McCubbins 2010; Lupia 2015). Works on partisan motivated reasoning question whether such cues help citizens make better informed decisions. Instead, these works assert that partisan biases lead citizens to see copartisan sources as experts and opposing partisan sources as non-experts, preventing effective low-information reasoning and political communication (Campbell 1960; Kiousis 2001; Cohen 2003; Iyengar, Sood, and Lelkes 2012; Bolsen, Druckman, and Cook 2014; Achen and Bartels 2016). This results in selective exposure to information sources and subsequent political polarization (Iyengar and Hahn 2009; Stroud 2011). These deeply embedded partisan attitudes may even prevent expert perspectives from effectively correcting misinformed beliefs and rumors (Flynn, Nyhan, and Reifler 2017; Berinsky 2017).
Yet, this research is challenged by a relatively smaller subset of works which questions the extent to which partisan cues overwhelm competing source cues (Bullock 2011; MacKenzie 2014, 2018). These works suggest that while partisan cues do exhibit great influence over political assessments and behavior, citizens will use competing cues when they are made readily available (Darmofal 2005; Messing and Westwood 2012; Nicholson 2012; Feldman et al. 2013, 2018; Leeper and Slothuus 2014; Metzger, Hartsell, and Flanagin 2015; Mummolo 2016). This suggests a somewhat more sanguine depiction of the average voter: still prone to affective partisan bias, but also willing and able to seek out expert opinions via simple source cues.
I seek to add to the broader understanding of how individuals use source expertise cues in a polarized environment. I argue that individuals take expertise source cues into account despite their own partisan biases. I leverage two unique survey experiments, which directly pit relevant source credibility cues against partisan cues to examine their relative influence on political persuasion and information consumption. I find that the influence of partisanship, while quite strong, does not completely overwhelm the competing source expertise cues. Instead, individuals do acknowledge and seek out arguments that come from expert sources, even after accounting for the partisanship of that source. The implications suggest that small expertise cues found in simple newspaper bylines and television chyrons can promote healthier democratic news consumption habits to a small but notable degree.
Expertise, source credibility, and polarization in political communications
Kunda (1990) notes that individuals are torn between two competing motivations: an accuracy motive and a directional motive. The former drives individuals to seek better quality sources to make informed, justifiable decisions, while the latter incentivizes selective reasoning to avoid cognitive dissonance and preserve identities and world views. This model of effortful motivated reasoning is supplemented by dual cognition models, which suggest that source cues allow for more subconscious, "peripheral" reasoning, minimizing the cognitive energy expended through the use of quick heuristics (Chaiken, Giner-Sorolla, and Chen 1996; Petersen et al. 2013). In either case, source credibility cues aid individuals by lowering the cognitive effort needed to identify accurate sources of information.
Source credibility itself is a multidimensional concept. Early models of source credibility suggest credibility is a function of perceived expertise and trustworthiness (Hovland and Weiss 1951; Hovland, Kelley, and Janis 1953; Lupia and McCubbins 1998). Apolitical works have added further dimensions. Ohanian (1990) provides a helpful overview of this literature, though it may not be directly applicable to the political sphere. For example, the concept of "attractiveness", measured with phrases such as "sexy", applies well to celebrities, but not as well to political source credibility. Much work has demonstrated that perceptions of honesty and impartiality can help individuals make more informed decisions utilizing similar low-rationality principles (Lupia and McCubbins 1998; Boudreau 2009a, 2009b; Boudreau and McCubbins 2010). This research, however, focuses more narrowly on one dimension: expertise.
Expertise is an assessment of the subject's qualifications, intelligence, and competence. Expertise is multifaceted, including both the quantity and quality of knowledge (Giffin 1967). Thus, expertise is relative, meaning that individuals ought to value information from sources that understand the consequences of potential decisions in that specific context, guiding the listener towards the most sensible option (Boudreau and McCubbins 2010; Lupia 2015). The availability of an expert perspective serves to incentivize the use of accuracy-based reasoning, as the cognitive and resource-related costs of acquiring accurate, detailed information have decreased. In specific circumstances, expert sources may still be biased or dishonest, and may be willing to intentionally mislead the listener. Nonetheless, expert sources offer the individual relevant and important information that non-expert sources would not be able to provide, while easing the burden of collecting detailed, accurate political information on one's own.
While prior scholarship in political source expertise offers intuitive understanding (i.e. credible sources are more persuasive than non-credible sources), there is not a thorough explanation as to how individuals judge a source's expertise, or credibility in general. One may intuitively understand that Secretary of State Colin Powell has more expertise than Jerry Springer (Druckman 2001), but it is not readily apparent why one speaker is more expert than another, or what citizens may take into account when judging a source's expertise.
When political news programs field nearly half a dozen pundits at a time, when and why is one pundit seen as more or less credible than another? In one famous example, a panelist on NBC's political talk show Meet the Press readily proclaimed to lack any scientific expertise before arguing against the existence of climate change, leading NBC to reconsider its policies on political punditry regarding scientific issues (Danielle Pletka of the American Enterprise Institute, November 25, 2018). Do citizens acknowledge that lack of expertise on the subject and discount the pundit's opinion relative to that of more expert peers? Or do they simply accept or reject the opinion based on partisan assessments? It is necessary to address the expectations as to why individuals may find source credibility cues to be useful when forming opinions and making political decisions.
Thus far, little work has been done directly testing whether citizens utilize source credibility cues in a polarized political environment. Partisan cues can be informational (e.g. a Republican candidate likely supports lower taxes) (Downs 1957; Aldrich 2011). However, partisanship also triggers identity-based affective reasoning that results in partisan biases (Campbell 1960; Iyengar and Westwood 2015). 1 A number of works suggest that partisan cues overwhelm competing cues, minimizing the relative influence of important contextual information such as policy content (Cohen 2003; Turner 2007; Achen and Bartels 2016), issue positions (Iyengar, Sood, and Lelkes 2012), candidate characteristics (Bartels 2002; Goren 2002; Simas and Ozer 2017), and even scientific facts (Hart, Nisbet, and Shanahan 2011; Bolsen, Druckman, and Cook 2014; Kraft, Lodge, and Taber 2015). This results in higher levels of partisan selective exposure and political polarization, with individuals disproportionately seeking copartisan sources and avoiding opposing partisan sources (Iyengar and Hahn 2009; Knobloch-Westerwick and Meng 2009; Stroud 2011; Knobloch-Westerwick 2012). This phenomenon may be further exacerbated by social media, as individuals have gained control over their flow of information (Bakshy, Messing, and Adamic 2015; Newman et al. 2017).
A more recent collection of works takes issue with prior literature's lack of consideration for the political context. These works show that individuals take competing information into account when it is made readily available, despite the polarized political context (Bullock 2011; Nicholson 2012; MacKenzie 2014, 2018). Economic games show that small changes in contextual incentives can decrease levels of partisan motivated reasoning and mitigate the role of "partisan cheerleading" during cognition of political information (Bullock et al. 2015; Prior, Sood, and Khanna 2015). Selective exposure research shows that partisan biases are often undercut by realistic contexts often overlooked by the motivated reasoning literature (Messing and Westwood 2012; Arceneaux and Johnson 2013; Feldman et al. 2013; Darmofal 2005; Metzger, Hartsell, and Flanagin 2015; Mummolo 2016; Jacobsen 2017). 2 Most relevant to this research, the addition of source cues indicating the quality of the source has been shown to decrease political polarization and partisan-based rejection of scientific facts (Lupia 2013; Bolsen and Druckman 2015; Druckman and Lupia 2016). This latter body of evidence serves to highlight a key point: by manipulating partisanship and no other source cue or relevant information, much work on partisan motivated reasoning has been measuring the effect of partisanship in a contextless vacuum. Thus, survey respondents may lean extra heavily on their partisan predispositions, as it is the only information that is readily available.
Due to this, I build upon the latter set of literature, and seek to directly address the necessity for context through the direct addition and manipulation of both partisan cues and relevant contextual source expertise cues. In doing so, I hope to provide a more realistic context in which to examine the persuasive influence of both partisanship and source credibility in political communication.
As such, I put forward a simple Expertise Hypothesis, which predicts that source cues related to a speaker's credibility should incentivize the individual's accuracy motive, and lead the individual to perceive that speaker's argument to be more persuasive, even after accounting for the source's political partisanship. I expect a main effect of source credibility, indicating that individuals consider the source's credibility even when accounting for the partisanship of that source.

Materials and methods
Study 1 administered an online survey to a sample of 949 students from a large public university in the southwestern United States in the spring of 2018. Though both women and Democrats were over-represented relative to the national population, the sample was more racially diverse than the typical student sample (see Appendix C).
While not a demographically representative sample, student samples are adequate for testing simple framing effects. Concerns of bias from student samples are mitigated when the researcher can model relevant heterogeneous treatment effects based on respondent demographics (Druckman and Kam 2011; Coppock and Green 2015). Research reveals high rates of replication and treatment effect homogeneity between student and nationally representative samples across political contexts and framing experiments (Krupnikov and Levine 2014; Coppock 2018). As an extra precaution, I measured political sophistication in order to test whether sophisticates were more receptive to expertise cues (Appendix F). Results indicate little evidence that the politically sophisticated were more receptive to expertise cues than less politically sophisticated respondents.
This study utilizes treatments that are similar to the New York Times' Room for Debate opinion column: a near-daily column in which two pundits or experts are invited to write opposing political opinion pieces. Like the New York Times column, the columns I have created include a short introduction featuring relevant source cue information about each author. This is not dissimilar to the typical byline or chyron featured in much of print, television, and online media.
The columns used in Study 1 discuss two political issues: labeling laws for foods containing Genetically Modified Organisms (GMOs) and automatic voter registration, with additional issue frames utilized in Study 2 (Table 1). Each article featured two authors, one arguing in favor of the given policy (pro author) and the other arguing against the policy (con author). Each article included a short introductory byline with relevant background information about the authors. While this design directly simulates a real world print news media column, this format is not far removed from cable television news formats, featuring multiple pundits. Respondents were asked to read both articles before answering questions regarding their perceptions of both the authors and their arguments. The order in which the articles appeared was randomly assigned.
The two political issues debated in these columns, GMO labeling laws and automatic voter registration, were chosen because they offer an insightful contrast in levels of partisan polarization. Evidence suggests that issues exhibiting clear ideological and partisan signals increase the impact of partisan cues relative to more ideologically ambiguous issues (Chong and Mullinix 2019). While most Americans support GMO labeling laws irrespective of partisanship (Funk and Kennedy 2016), Democrats and Republicans tend to be far more divided in their support for automatic voter registration laws (McCarthy 2016). These issues were chosen because they are familiar and somewhat technical, while avoiding such high degrees of salience that real world treatment effects may mute partisan effects (Ciuk and Yost 2016; Slothuus 2016).
I manipulated the pro and con authors' biographies to include relevant information about the authors' partisan identification and level of expertise. Photos of each author were excluded to prevent potential attractiveness or race-related confounds. In every manipulation, one author's biography contained a cue indicating high levels of expertise while the other author's biography contained a cue indicating low expertise. Similarly, one author was always Republican and the other a Democrat, resulting in a 2 × 2 experiment with four total combinations. 3 The expertise-specific manipulations varied for each issue (Table 1). A separate analysis (n = 372) demonstrated that these manipulations were not confounded by perceptions of author ideology or honesty/trustworthiness, which is another distinct dimension of credibility (see Appendix). Manipulations of expertise were directly linked to the authors' professional background, signaled either through high degrees of issue-relevant education or occupational experience. These cues are not dissimilar to those leveraged in apolitical research dating back to early efforts to establish the validity of source credibility measures (Hovland and Weiss 1951;Hovland, Kelley, and Janis 1953). While professional and educational cues are just two among many potential cues that may signal expertise (Lupia and McCubbins 1998), such cues are commonly used by media outlets in author bylines and chyrons to establish the qualifications of the speaker. This dichotomous design offers a more conservative test of the Expertise Hypothesis, by eliminating confounding variables that would undermine the influence of partisanship, such as the inclusion of a third non-partisan option (Feldman et al. 2013) or an apolitical entertainment option (Arceneaux and Johnson 2013). 
When testing the relative influence of partisanship and expertise, this design creates contextual circumstances where one would be most likely to expect the influence of partisanship to dominate the influence of other competing cues, disproving the Expertise Hypothesis.
After reading each article, subjects were asked a series of questions used to construct two dependent variables. The first is a differenced measure of perceptions of source expertise, meant to serve as a manipulation check. Respondents were asked to rate how well specific adjectives described each of the authors: knowledgeable, qualified, experienced, and competent. Each of these adjectives was measured on a five-point Likert scale (1 = "Not well at all" to 5 = "Very well"; Cronbach's α = .903). Preliminary analysis from a pilot study reveals that these four adjectives all load onto the same latent factor and analysis from this study is consistent with these findings. These measures were averaged to get a single five-point measure of perceived expertise for each author.
The perceived expertise of the con author (the author arguing against the policy) was subtracted from that of the pro author (the author arguing in favor of the policy) creating a differenced measure of relative expertise of competing information sources, ranging from −4 to 4. Positive scores indicate that the pro author was perceived to have more expertise than the con author; viceversa for negative scores. This research focuses on expertise among competing perspectives, necessitating the differenced measure.
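As an illustrative sketch only (the data and variable names here are hypothetical, not the study's), the averaged and differenced expertise measure, along with Cronbach's α for the four-item scale, can be computed as follows:

```python
from statistics import mean, pvariance

def scale_score(item_ratings):
    """Average one respondent's ratings across the four adjective items
    (knowledgeable, qualified, experienced, competent), each 1-5."""
    return mean(item_ratings)

def relative_expertise(pro_items, con_items):
    """Pro author expertise minus con author expertise; ranges -4 to 4."""
    return scale_score(pro_items) - scale_score(con_items)

def cronbach_alpha(item_columns):
    """Cronbach's alpha for k items, each column a list over respondents."""
    k = len(item_columns)
    item_var_sum = sum(pvariance(col) for col in item_columns)
    totals = [sum(resp) for resp in zip(*item_columns)]  # per-respondent sums
    return (k / (k - 1)) * (1 - item_var_sum / pvariance(totals))

# Hypothetical respondent: rates the pro author [5, 4, 5, 4]
# and the con author [2, 2, 3, 1] on the four adjectives.
print(relative_expertise([5, 4, 5, 4], [2, 2, 3, 1]))  # 2.5
```

A positive value indicates the pro author was perceived as more expert, matching the sign convention of the differenced measure described above.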
The second measure assessed the relative argument strength between the two authors. While the respondent's support for the policy (e.g. "Do you support or oppose GMO labels?") is a more direct measurement of opinion, it is a weak measure of the intended effect, as it is not directly linked to the expertise manipulation. To measure perceived argument strength, respondents were asked to rate the persuasiveness and effectiveness of each author's argument on seven-point Likert scales (e.g. 1 = "very unpersuasive" to 7 = "very persuasive"). These were averaged into one measure of argument strength for each author. The perceived argument strength of the con author was subtracted from that of the pro author, resulting in a measure of relative argument strength, ranging from −6 to 6. Positive scores indicate that the pro author argument was stronger than the con author argument; vice-versa for negative scores. This measure is directly linked to the treatment, making it a better measure of the average treatment effect.
For comparative purposes, analysis included a measure of policy support despite the weaknesses noted previously. Policy support was measured by asking respondents whether they supported or opposed the policy (1 = "strongly oppose", 7 = "strongly support"), and whether the policy was good or bad for the average American (1 = "very bad", 7 = "very good"). Answers to these questions were then averaged for the GMO labels (Cronbach's α = .71) and automatic voter registration (Cronbach's α = .82) frames respectively. I anticipate null effects for these variables. While a short, one-time argument should be sufficiently persuasive to alter individual perceptions and assessments of an argument or speaker, it is likely too weak to reverse an individual's long-standing policy views or reliably spill over into policy views on adjacent issues (Hopkins and Mummolo 2017). Moreover, one might expect permanent changes to policy support to occur only over time, with repeated exposure to the argument, rather than through a one-time argument.
Analysis featured two primary independent variables of interest: pro author expertise and the respondent's partisan congruence (copartisanship) with the pro author. The author expertise manipulation was measured with a simple binary variable (1 = high expertise pro author/low expertise con author, 0 = low expertise pro author/high expertise con author). Respondent partisanship was measured with the traditional branching format, with leaners coded as partisans and "pure independents" omitted. Partisan congruence is measured by a binary variable (1 = copartisan pro author/opposing partisan con author, 0 = opposing partisan pro author/copartisan con author). Analysis included an interaction between author expertise and partisan congruence to test whether a copartisan author receives a greater benefit from the expertise cue than an opposing partisan author (additional analysis in Appendices D-F).
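A minimal sketch of this coding scheme (field names are mine, not the study's) makes the 2 × 2 structure and the interaction term explicit:

```python
def code_condition(pro_party, resp_party, pro_is_expert):
    """Code one respondent's experimental cell into regression dummies.

    pro_party / resp_party: 'D' or 'R' (pure independents omitted);
    pro_is_expert: True if the pro author received the high expertise cue.
    """
    expertise = 1 if pro_is_expert else 0             # high expertise pro author
    copartisan = 1 if pro_party == resp_party else 0  # copartisan pro author
    return {
        "expertise": expertise,
        "copartisan": copartisan,
        # Interaction: does a copartisan author gain extra benefit
        # from the expertise cue?
        "expertise_x_copartisan": expertise * copartisan,
    }

# A Democratic respondent facing a high expertise Democratic pro author:
print(code_condition("D", "D", True))
```

The main effect predicted by the Expertise Hypothesis corresponds to the coefficient on `expertise` after `copartisan` and the interaction are included in the model.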
Should the evidence support the Expertise Hypothesis, one would expect to see a positive main effect for the expertise variable. This would indicate that respondents see the author as more expert and more persuasive even after accounting for partisan congruence. Should the Expertise Hypothesis not hold, one would expect to see a substantively small and statistically null main effect for the expertise variable. Nonetheless, one should still expect copartisanship to have a strong influence on respondent assessments.

Results

Figure 1 plots the mean level of perceived source expertise by experimental condition (GMO labels issue frame on the left, automatic voter registration on the right). Respondents perceived a high expertise author to have a higher level of expertise than a low expertise author across both issue frames. Yet, respondents demonstrate a notable partisan bias in the automatic voter registration frame, finding high expertise copartisans to have more expertise than high expertise opposing partisans. This serves as a manipulation check, ensuring that the manipulations worked as intended. Further, these results provide preliminary support for the Expertise Hypothesis, as respondents are clearly able to identify and distinguish expert sources, even when the expert is an opposing partisan.

Table 2 presents standardized coefficients from an OLS regression that analyzes the effect of the expertise manipulation on perceptions of argument strength for both issue frames. The coefficients in Table 2 have been standardized, allowing one to more directly compare the magnitude of the effect sizes among the independent variables (see Appendices D-F for non-standardized coefficients and additional analyses). Once again, the dependent variable measures the relative argument strength between the two authors and competing cues. Results are consistent with polling, as the majority of respondents support both GMO labels and automatic voter registration. 4

Results indicate a modest but positive main effect from the expertise manipulation in the GMO labels issue frame (Models G1 and G2), as a high expertise source's argument was rated stronger than that of a low expertise source (standardized β = .21, 6% increase). Effect sizes were likely undermined by respondent inattentiveness, as roughly 25% of respondents spent less than three seconds on the manipulations. Moreover, the effect of expertise is much larger than that of copartisanship. The interaction effect in Model G2 suggests that copartisan sources benefited more from the expertise cue than opposing partisan sources, though this finding was statistically null. Thus, the expertise cues seem to defy convention and overwhelm the partisan cues, lending strong support to the Expertise Hypothesis.
In the automatic voter registration issue frame (Models A1 and A2), results yield a less consistent main effect of expertise. The expertise variable exhibited a small, positive, albeit statistically null effect (standardized β = .06, 2% increase, p < .07) in Model A1, with copartisanship exhibiting an effect twice as large (standardized β = .16, 5% increase, p < .01). Yet, Model A2 reveals a much stronger main effect for the expertise variable after accounting for a heterogeneous relationship between expertise and copartisanship. In fact, the effect size of the expertise variable (standardized β = .132, 4% increase, p < .01) was slightly larger than that of copartisanship (standardized β = .105, 3% increase, p < .05). Results also reveal that copartisans receive a greater benefit from the high expertise cue. A marginal effects plot (see Appendix F) indicates that this interaction effect is driven primarily by a positive bias towards copartisans rather than a punishment of opposing partisans. Interestingly, further analysis shows that this interaction effect is exclusive to Republican respondents (see Appendix F). Nonetheless, results demonstrate two substantive points: (1) expertise cues have consistent influence over respondent perceptions despite competing partisan cues, and (2) partisan cues nonetheless remain powerful and influential despite the competing contextual information.
Models G3, G4, A3, and A4 present similar results for the support variables for both issues. Analyses yielded the anticipated results, as the expertise cue had a substantively small, null relationship with support for each policy. Interestingly, copartisanship yielded similarly small and null effects, with the exception of Model A3. However, this effect (standardized β = .254, 2% increase, p < .01) dissipated once the interaction of expertise and copartisanship was accounted for.
In summary, these results offer support for the Expertise Hypothesis. Even in an issue context that is highly polarized along partisan lines, individuals appear to be taking the source's level of expertise into account. In fact, once accounting for heterogeneity between expertise and partisan congruence, the expertise cues have a degree of influence that rivals that of partisan cues. While these effects are modest, respondents nonetheless are able to identify expertise cues when made readily available, and those cues hold a degree of influence that has a modest, but notable substantive impact on political assessments despite competing partisan considerations.
While these are encouraging results, the design in Study 1 utilizes forced exposure. Moreover, the task, while short, was cognitively taxing, as respondents had to read and consider two summarized arguments. This potentially caused lower rates of attentiveness, undermining effect sizes for all variables. Finally, results of this study indicate that individuals are able to take expert opinions into account in a politically polarized environment, but it remains unclear whether they are willing to do so. This is addressed in Study 2.

Study 2
Study 2 seeks to address whether individuals are willing to seek expert political opinions in spite of individual tendencies for partisan selective exposure.
I leverage a unique selection experiment design, manipulating both partisanship and expertise cues in a similar manner to the previous study. Selection experiments are ideal in this context, and have been utilized extensively in political science and communications to study selective exposure and partisan biases (Darmofal 2005; Iyengar and Hahn 2009; Stroud 2011; Arceneaux and Johnson 2013; Feldman et al. 2013, 2018; Mummolo 2016). This specific design is meant to mimic how many Americans receive their news on online platforms like Facebook and Twitter; the context is akin to what one might see in a social media news feed or among algorithmically curated trending stories. I seek to demonstrate the behavioral influence of expertise source cues, even when in direct competition with partisan cues that may trigger selection biases.
Study 2 uses an online survey sample of 894 students from a university in the southwestern United States in the fall of 2018. Respondents were shown two competing headlines on a given political issue, one arguing in favor of a given policy (pro author) and one arguing against that policy (con author). The design is very similar to Study 1, with author bylines manipulated to include both partisan and expertise-related source cues. Yet, Study 2 differs slightly from Study 1, as respondents were shown only the headlines and author bylines, not the content of the articles themselves. In addition, I manipulated partisanship in Study 2 by manipulating the news outlet: Fox News or MSNBC. This is a less direct approach than outright telling respondents whether each author was a Republican or Democrat, yet it is far more similar to a real world context and frequently utilized in past literature. Study 2 also includes two additional issues: tariffs on U.S. trade partners and U.S. policy on military drone strikes (see Table 1). Both of these issues were chosen because they are salient in current events and divisive along partisan lines, as Republicans are more supportive of President Trump's trade tariff policy (Laloggia 2018) and the use of military drone strikes on foreign combatants (Pew Research Center 2015). In addition, Study 2 includes a partisan control condition, in which the two authors differed in levels of expertise, but their partisanship remained constant. This control condition allows one to better evaluate whether respondents are rewarding copartisans for their identity and expertise, or punishing opposing partisans for a lack of expertise. The final result is a 2 × 3 experiment.
Once again, this experiment offers a stringent test of the Expertise Hypothesis by excluding contexts that might serve to blunt the effect of partisan biases, such as apolitical options (Arceneaux and Johnson 2013) or a third, neutral ideology option (Feldman et al. 2013). I designed this experiment to maximize the potential effect of partisan biases, providing the most conservative possible test of the Expertise Hypothesis. Respondents were exposed to all four issue frames in a randomized order and were simply asked to indicate which of the two articles they would rather read before moving on to the next set of articles on a different issue. Just as in Study 1, the new manipulations were tested in a separate study which served as a manipulation check, ensuring the manipulations were not confounded by perceptions of ideology or trustworthiness/honesty (see Appendix). Table 3 presents a binary logit regression analyzing the factors influencing whether respondents selected the article arguing for the pro-policy position. The analysis included clustered standard errors and fixed effects for both the respondent and the issue, which are omitted from the table for parsimony. Figure 2 plots the relevant predicted probabilities. Table 3 displays a consistent, large positive main effect for the expertise condition. Respondents are roughly 13% (16% when non-voting-eligible respondents are excluded) more likely to select the pro-policy argument when the author was a high expertise source, relative to a low expertise source (p < .01). Respondents are also roughly 12% more likely to select the pro argument when the author was a copartisan, relative to both an opposing partisan source and the control (p < .01). Given the comparable effect sizes, results indicate that partisan biases did not prevent respondents from seeking an expert perspective (see Appendices D-F for further analysis).
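To make the modeling step concrete, the analysis behind Table 3 can be sketched on simulated data. The snippet below is a minimal illustration, not the study's actual code or estimates: it fits a binary logit of article selection on an expertise indicator and partisan-congruence dummies (via Newton-Raphson) and then computes the average predicted-probability gap between high and low expertise authors. The sample size, coefficients, and data-generating process are assumptions for illustration only, and the sketch omits the clustered standard errors and respondent/issue fixed effects used in the actual analysis.

```python
import numpy as np

# Simulate choice data loosely mirroring the 2 x 3 design:
# expertise (high/low) crossed with partisan congruence
# (0 = control, 1 = copartisan, 2 = opposing partisan).
rng = np.random.default_rng(0)
n = 4000
high_exp = rng.integers(0, 2, n)
congruence = rng.integers(0, 3, n)
X = np.column_stack([
    np.ones(n),                         # intercept
    high_exp.astype(float),             # high expertise cue
    (congruence == 1).astype(float),    # copartisan dummy
    (congruence == 2).astype(float),    # opposing partisan dummy
])
# Assumed (illustrative) latent coefficients on the logit scale.
beta_true = np.array([-0.3, 0.6, 0.5, -0.2])
p_true = 1 / (1 + np.exp(-X @ beta_true))
y = (rng.random(n) < p_true).astype(float)

# Fit the binary logit by Newton-Raphson.
beta = np.zeros(4)
for _ in range(25):
    mu = 1 / (1 + np.exp(-X @ beta))
    grad = X.T @ (y - mu)                    # score vector
    hess = X.T @ (X * (mu * (1 - mu))[:, None])  # observed information
    beta += np.linalg.solve(hess, grad)

# Average predicted-probability gap: high vs. low expertise authors.
X_hi, X_lo = X.copy(), X.copy()
X_hi[:, 1], X_lo[:, 1] = 1.0, 0.0
gap = (1 / (1 + np.exp(-X_hi @ beta)) - 1 / (1 + np.exp(-X_lo @ beta))).mean()
print(round(gap, 3))
```

By construction of the assumed coefficients, the printed expertise gap falls in the same general range as the selection effects reported above; with the study's real data, the same predicted-probability comparison would be computed from the fixed-effects logit in Table 3.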

Results
The main effect of expertise does not appear to be driven solely by a desire to reward copartisans, nor by a penchant for punishing opposing partisans. Results indicate that both copartisans and opposing partisans benefit from the high expertise source cue to relatively equal degrees. Opposing partisans were 13.3% more likely to be selected when assigned to the high expertise treatment, relative to the low expertise condition. High expertise copartisans were 11.5% more likely to be selected, relative to low expertise copartisans. While copartisans may not have experienced a disproportionate benefit from the expertise cue, the expertise cue itself did shift respondent expectations and behavior in a manner consistent with low information rationality and the Expertise Hypothesis. Moreover, high expertise opposing partisans received an increase in selection that brought them level with the non-partisan high expertise control, suggesting that the expertise cue mitigated partisan biases to a notable degree.
Respondents also appear to punish opposing partisan sources more severely for a lack of expertise, relative to copartisan sources. Respondents select a low expertise opposing partisan source 4.6% less often than they select a low expertise author in the control condition. Comparatively, respondents select a low expertise copartisan author 11.2% more often than a low expertise author in the control condition. Both low expertise copartisan authors and low expertise opposing partisan authors are selected less often than their high expertise counterparts, indicating that respondents update their behavior in a logical and consistent fashion, supporting the Expertise Hypothesis. Still, respondents show a willingness to give low expertise copartisans a benefit of the doubt that is not afforded to opposing partisans, speaking to the sizable influence of partisanship. Nonetheless, the gap between high expertise and low expertise sources does not vary greatly across partisan congruence conditions (control, copartisan, or opposing partisan). While expertise cues exhibit strong influence over news consumption behavior despite partisan allegiances, the role of partisanship remains powerful and prevalent.
Overall, results from Study 2 provide substantial support for the Expertise Hypothesis. When given the choice, individuals show a strong preference for expert sources. This selection preference holds across several issue contexts, and remains when accounting for individual partisan biases. This is not to say that expertise source cues completely eliminate partisan biases, as individuals demonstrate strong positive biases towards copartisan sources. Nonetheless, these biases did not overwhelm the expertise source cue. Instead, respondents demonstrated a willingness to seek the perspective of an expert author when the option is available, even on highly polarizing political issues. Put another way, imagine a citizen surfing the internet or social media with many friends and sources discussing the day's political news. That citizen may be tempted to turn a blind eye and scroll past news that disagrees with their partisan worldview or has been posted by opposing partisan sources. However, results from Study 2 suggest that when the news comes from a qualified expert source, citizens may be more willing to allay their motivated suspicions and click on that post, willingly exposing themselves to new informative perspectives on politics.

Conclusion
Multiple analyses show relatively consistent support for the Expertise Hypothesis: source cues related to expertise sent a strong signal to individuals, affecting both their perceptions of the argument itself and their behavior when seeking information among competing arguments. This influence remained consistent despite the presence of competing partisan cues across a variety of issue contexts. These findings provide clear evidence that individuals do consider the context-relevant expertise of an author when source cues make that information readily available. These results should not downplay the influence of partisanship, as party cues remain highly influential and salient to respondents despite varying levels of expertise. The presence of expert source cues did not wholly eliminate individual partisan biases. Moreover, these findings suggest that a single argument made at one time by an expert may have only a limited effect on issue opinions. Additionally, the issues presented are somewhat technical and may not apply directly to other issue frames. Nonetheless, effect sizes for expertise were comparable to those of partisanship, and respondents updated their assessments and behaviors in the expected logical fashion.
These findings carry potentially impactful ramifications for scholarly understanding of political communication, political knowledge, and democratic competence. Many prior works tend to treat source credibility and expertise either as a given or as something wholly irrelevant due to the overwhelming influence of partisanship. Such approaches fail to address why source credibility matters and how individuals judge it. This leads to the rather pessimistic, but somewhat misleading, conclusion that individuals are too partisan to consider source expertise. While it is true that individuals are motivated reasoners, these results paint a more sanguine depiction of the American citizen: highly partisan, but able to consider context and alternative information when it is made both salient and readily available.
While expertise source cues may not deliver a normatively desirable knockout blow to partisan bias, the implications here offer reason to be cautiously optimistic about a political messenger's ability to disseminate accurate information and about citizens' levels of democratic competence. The expertise manipulations utilized in this experiment are easy to implement: just a brief sentence about the author's background. Despite their simplicity, they appear to help communicate accurate information more effectively, and they are virtually costless to implement. Media outlets may be able to increase the effectiveness of communications by providing the audience with carefully selected source cues that indicate the communicator's expertise on relevant subject matter. Such cues are already employed to some degree in newspaper columns and television news (e.g. "Representative from x district" or "author of y book"). Careful selection and increased ubiquity of source expertise cues may help media entities disseminate accurate information and help the average citizen identify useful information without expending additional resources or effort. Even if expert source cues help only to a marginal degree, their utility far outweighs their cost.
Pundits want to appear more credible and will often muddle expertise framing intentionally. For example, former White House Deputy Assistant Sebastian Gorka maintains a controversial insistence on being referred to as "Dr. Gorka Ph.D.", in spite of questions regarding the validity of his degree and journalistic guidelines that reserve the term "doctor" for medical professionals (Borchers 2017). Moreover, this research does not address situations in which two unqualified pundits debate each other. This highlights the responsibility of journalists to choose their contributors carefully so as to avoid confusing the audience. Without healthy and careful journalistic practices, media outlets may undercut the strength of their arguments and the persuasiveness of expert opinions.
While the research in its current form applies well to news sources, political candidates may offer a unique challenge, as voters may exhibit a penchant for "outsider", non-expert candidates. Then-nominee Donald Trump had little political experience, yet this may have contributed to his perception as an "outsider", anti-elitist candidate. Future analysis may also benefit from increased consideration of individual-level factors, such as political sophistication. A control condition that removes expertise cues entirely may also be useful in future research.

Disclosure statement
No potential conflict of interest was reported by the author(s).

Notes on contributor
Adam Ozer recently received his Ph.D. from the University of Houston.