Oil Spill Response Risk Judgments, Decisions, and Mental Models: Findings from Surveying U.S. Stakeholders and Coastal Residents

ABSTRACT This study applies a mental models survey approach to assess public thinking about oil spills and oil spill response. Based on prior interdisciplinary oil spill response research, the study first applies qualitative interview results and a response risk decision model to the design of a survey instrument. The decision model considers controlled burning, public health, and seafood safety. Surveying U.S. coastal residents (36,978 pairs of responses) through Google Insights identifies beliefs and gaps in understanding as well as related values and preferences about oil spills and oil spill response. A majority of respondents are concerned about economic impacts of major oil spills, and tend to see ocean ecosystems as fragile. They tend to see information about chemical dispersants as more important than ecological baseline information, and dispersants as toxic, persistent, and less effective than other response options. Although respondents regard laboratory studies as predictive of the effects of oil and of controlled burning, they are less confident that scientists agree on the toxicity and effectiveness of dispersants. The results illustrate opportunities to reframe discussions of oil spill response in terms of tradeoffs between response options, and new possibilities for assessing public opinions and beliefs during events.


INTRODUCTION
Controversies surrounding the use of chemical dispersants to mitigate oil spills from 1980 through the Deepwater Horizon incident have shown that communication with stakeholders and the general public about dispersants has long been, and remains, a problem (Bostrom et al. 1995, 1997; Pavia and Smith 1984; Pavia and Onstad 1985; Pond et al. 1997; Walker 2010, 2011a,b,c, 2012; Walker et al. 1997, 1999, 2001a). Further, high quality information to support decision-making is one of the perceived goals of oil spill response (Tuler et al. 2008). The research presented here represents one piece of a collaborative social science and natural science research project designed to address public, media, and political concerns and develop preparedness recommendations and response tools to facilitate well-balanced decisions under the uncertain conditions of risk that spills represent.
To develop strategies for engaging communities and individuals in discussions about spill issues, the overarching project builds on a mental models approach for risk communications (described below and in Morgan et al. 2002) and entails a relatively new approach to survey research, analysis of social media data, and integration of relevant social and natural science research findings (Leschine et al. 2015; Starbird et al. 2015; Walker et al. 2015). The project has three subsidiary objectives: (1) identify key information needs and areas of confusion and misunderstanding, (2) explore the role of social media in effective risk communication, and (3) identify better methods to communicate scientific uncertainty and complexity with respect to response alternatives. The results are intended to be immediately applicable to promote effective response communications about dispersants and oil spills. Project end users include Unified Command (Federal and State On-scene Coordinators and spillers known as Responsible Parties), dispersant decision-makers from coastal Regional Response Teams (RRTs), and academia. Many of these key stakeholders are looked to by elected officials/politicians and the public for assurance about oil spill response options.

MENTAL MODELS RESEARCH APPROACH
Successful risk management and risk communication depend on knowledge of fears, needs, and values of intended audiences (Levine and Picou 2012). While causal beliefs are only one component of risk perceptions, they can be a critical element of decisions and preferences (de Bruin and Bostrom 2013). A mental model is an individual's understanding of how something works in the real world. Key stakeholders and members of the public hold a variety of risk perceptions and mental models of dispersant and oil spill processes (Gill et al. 2012; Webler and Lord 2010; Walker 2011c). As a result of the Deepwater Horizon (DWH) incident and the attention the incident received, many stakeholders may have formed technically accurate beliefs regarding dispersant and oil spill processes, and recognize the relevant scientific uncertainties. Nevertheless, evaluations from recent workshops (Walker 2012; Walker and Bostrom 2014) suggest that stakeholder mental models often omit key elements, and may focus unduly on elements that contribute relatively little to potential risk. Such misperceptions of natural processes associated with the lifecycle of an oil spill and dispersant use can influence policy implementation and public health outcomes (CRRC et al. 2012).
To assess causal beliefs, information needs, and risk perceptions and concerns of lay stakeholders, this research builds on a mental models risk communication approach. This approach reflects both the natural and engineering sciences of how risks are created and managed, and the social, behavioral and decision sciences of how people comprehend and respond to such risks (Morgan et al. 2002). The approach entails developing a decision-focused model of dispersant and oil spill processes that reflects the best relevant available science and expertise, in order to identify correct causal beliefs as well as misperceptions that might influence oil spill response decisions. By recognizing people's concerns and prior beliefs, a mental models approach can improve ways of communicating complex scientific information, such as that about oil spills and dispersant use, and empower informed decision-making (Fischhoff et al. 2011). Mental models are important as they are people's "inference engines" and show how people connect contexts or ideas (Gentner and Stevens 1983). Mental models of hazardous processes include ideas people have about identifying a risk, exposure to the risk, the effects of exposure, how to mitigate the risk, and how the risk unfolds in time.
Comparing lay causal beliefs, judgments, and decision-making with expert decision models can provide insights about information gaps and misunderstanding, which in turn help identify knowledge areas to address, thus supporting more effective communications. Mental models approaches belong to a larger category of qualitative research approaches to better understand stakeholder beliefs and perceptions concerning risk (Wood et al. 2012). The aim of such research is to discover how people think about an issue, in order to assess how new information will be interpreted so it can be designed to be most useful.
The mental models approach in this research has five steps, as described in detail by Bostrom et al. (1992) and Morgan et al. (1992, 2002). As noted above, the first is to develop a decision model for the risk problem at hand-in this case the use of dispersants on marine oil spills-informed by the best available science. This type of decision model fits into the category of policy models, as described by Webler et al. (2011, p. 477). The second and third of these steps-in-depth mental models interviews and a subsequent survey of a representative sample of potential message recipients-are combined in de Bruin and Bostrom (2013), which provides an overview of mental models research. This approach has been applied successfully in a wide variety of domains (e.g., dispersant communications [Bostrom et al. 1995], flash floods [Wagner 2007], injury prevention [Austin and Fischhoff 2012], and wildland fire [Zaksek and Arvai 2004], among others [see de Bruin and Bostrom 2013 for a description of other applications]). While this article focuses on new survey results, the project leverages extensive work on most of the five steps, completed previously (Bostrom et al. 1995). Prior research had revealed a lack of shared understanding among decision-makers about oil properties and fate and transport, even before the addition of dispersants (Bostrom et al. 1997; Scholz et al. 1999).

Expert Decision Model
Members of the research team developed an expert decision model for dispersant use in oil spill response through expert elicitation in the late 1990s and 2012 (Bostrom et al. 1997; Bostrom et al. 2014; Walker and Bostrom 2014; see the Appendix). The model was developed to support communications designed for stakeholders in oil spill response, and in particular for those with responsibility for response decisions. Because it is designed for response decisions and not necessarily to address those decisions faced by coastal residents, only some of the model is relevant for public survey samples. Nevertheless, it reflects the overall structure of the hazardous process, from exposure (sources, pathways, and influences on these), through effects (ecological, economic, as well as human health in the later model) and mitigation of risk. As shown in the Appendix, the key pieces in the model are: initial oil (dispersibility), time, physical and environmental conditions, fate and transport processes, logistics, response options (best practices), and impacts of both the spill and the response.

Survey Item Selection
The initial survey questions for the project derive from mental models research with oil spill responders and stakeholders in the late 1990s (Bostrom et al. 1997; Pond et al. 1997). These were revised during survey toolkit development in 2012, through three workshops (Walker and Bostrom 2014). The initial intent was to refine these questions in a small sample of cognitive interviews (i.e., "read-alouds"; Ericsson and Simon 1993; Sudman et al. 1996) with spill-interested non-experts on the Gulf coast or in Alaska.
A principal components analysis of the 2012 workshop responses to earlier versions of the candidate survey questions was used to select initial sets of items whose content and structure (including within-set correlations) would be of interest. The decision model developed in the August 2012 workshop was used to guide the selection of additional questions. Interview recruitment e-mails were sent to a sample of Alaska Coastal Response Research Center (CRRC) workshop participants and Mississippi Sea Grant contacts (n = 100 total), with the anticipation of conducting at least a dozen cognitive interviews by telephone. As described below, fewer than this responded. To compensate for the small number of cognitive interviews, questions debriefing the interpretation of each item (question) were added to the survey data collection.
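The item-selection step above can be sketched in code. This is a minimal illustration, not the study's analysis: the response matrix, item count, and coding scale are all invented, and only a basic eigendecomposition-based principal components analysis of the inter-item correlations is shown.

```python
import numpy as np

# Hypothetical workshop responses: 24 participants x 8 candidate survey items,
# coded on a 1-5 scale (illustrative random data, not the study's data).
rng = np.random.default_rng(7)
responses = rng.integers(1, 6, size=(24, 8)).astype(float)

# Principal components of the inter-item correlation matrix: items that load
# together on early components form candidate "sets" with correlated content.
corr = np.corrcoef(responses, rowvar=False)   # 8 x 8 item correlation matrix
eigvals, eigvecs = np.linalg.eigh(corr)       # eigh returns ascending order
order = np.argsort(eigvals)[::-1]             # re-sort descending
eigvals, eigvecs = eigvals[order], eigvecs[:, order]

explained = eigvals / eigvals.sum()           # variance share per component
loadings = eigvecs[:, 0]                      # item loadings on 1st component

# Items with the largest absolute loadings on a shared component are natural
# candidates to field together as a correlated question set.
top_items = np.argsort(np.abs(loadings))[::-1][:3]
print("variance explained by PC1: %.2f" % explained[0])
print("candidate item set:", top_items.tolist())
```

In practice one would also inspect later components and the within-set correlations before fixing the item sets.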
The team worked with Google Insights, with the aim of applying a novel multiple matrix survey design (Thomas et al. 2006; Gonzalez and Eltinge 2007) to elicit perceptions, beliefs, and preferences representative of coastal residents nationally using the aforementioned survey items. Google Insights now fields up to 10 questions per respondent, but as of July 2013 could still only guarantee timely responses with no more than two questions per respondent (pairs). Thus, our design is based on pairwise questions. The analysis here focuses on overall findings, although the design pairs questions with the intent of allowing inferred response sets for multiple questions by coastal region. Survey questions were targeted at National Oceanic and Atmospheric Administration (NOAA)-designated U.S. coastal counties (excepting those in the Great Lakes region). Google uses algorithms to analyze IP addresses and user behavior to infer demographics (gender, age, income, and residential density), and has validated this approach, demonstrating that response rates and sampling errors are comparable to or better than those obtained with Internet panels or telephone surveys (McDonald et al. 2012). In all cases the weights do not change the gist of the relative distributions, except that they reduce the proportion of "Don't Know" responses by up to 10% in some instances. However, not all responses can be weighted; in the analyses that follow, some results are weighted and some are not. Given the two-question constraint, along with strict character limits on prompts, some questions were used to introduce a context for other questions: a question regarding ocean ecosystem resilience, adapted from Holling's myths of ecological stability (Holling 1979; Holling et al. 2002; Schwarz and Thompson 1990; Thompson et al. 1990; Steg and Sievers 2000; Leiserowitz et al. 2010); two open-ended questions, one a free association with chemical dispersant use and one about oil spill information wants and needs; and a question regarding anticipated economic impacts of a spill. One intention of these context-building questions is to introduce the topic at hand (marine oil spills and chemical dispersants) in a manner that is more accessible to a lay audience. Each of these context-building questions is then followed by another, somewhat more technical question, which addresses an element of the expert decision model described above (e.g., fate and transport).

584 Hum. Ecol. Risk Assess. Vol. 21, No. 3, 2015
Perceptions of Oil Spills and Dispersant Processes
In sum, the questions are designed to ask about those things we anticipate being salient in people's mental models (from prior interviews, as revised after the cognitive interviews in this study, and informed by Starbird et al. 2015), as well as to address the major elements of the hazardous process, as reflected in the decision model. This way of thinking about causality with regard to hazards and risk decisions was initially inspired by work by Kates and colleagues (Hohenemser et al. 1983).
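The pairwise (multiple matrix) assignment described above can be sketched as follows. The item identifiers and pool sizes are hypothetical; the point is only that rotating respondents through all context-by-technical pairs covers the full item pool two questions at a time.

```python
import itertools

# Hypothetical item identifiers for the two question roles.
context_items = ["ocean_resilience", "economic_impact",
                 "dispersant_association", "info_needs"]
technical_items = ["weathering_q", "evaporated_oil_q",
                   "dispersant_biodegradation_q"]

# Enumerate all context x technical pairs, then deal them out to respondents,
# so each respondent answers only two questions but the sample covers all items.
pairs = list(itertools.product(context_items, technical_items))

def assign_pair(respondent_index):
    """Rotate respondents through the pair pool for balanced coverage."""
    return pairs[respondent_index % len(pairs)]

sample = [assign_pair(i) for i in range(36)]  # 36 respondents -> 3 full cycles
print(sample[0])
```

A production design would also randomize pair order within cycles and track region so that response sets can be inferred by coastal region, as the text describes.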
The ocean ecosystem resilience question not only reflects beliefs about nature, but has also been hypothesized to correspond to different ways of viewing society, that is, to differences in cultural cognition (Kahan et al. 2007; Steg and Sievers 2000). As shown in Figure 1, random would correspond to nature being capricious (a fatalist, culturally), threshold to nature as perverse/tolerant (hierarchist), stable or resilient to nature as benign (individualist), and fragile to viewing nature as ephemeral (egalitarian). Because evidence suggests that these paired views of nature and society are not statistically correlated (Grenstad and Selle 2000), however, our discussion of the results from these questions will focus only on beliefs about the resilience of ocean ecosystems.

Cognitive Interviews
The initial response to our interviewee recruitment e-mails for the cognitive interviews, sent to the Alaska workshop and Mississippi Sea Grant lists, was under 5%. Further, even though we screened out a significant proportion of those on the lists whose e-mails or other contact information showed them to be NOAA employees or academic researchers, those who did respond to our request for participants tended to be heavily invested stakeholders with significant experience. We conducted nine cognitive interviews via Skype or telephone, five of which were with a non-expert convenience sample from Seattle. The interviews ranged from about 17 min to almost an hour.
Cognitive interview results and associated comments supported switching back from a Likert-type response scale to a True-False response scale for the knowledge questions, eliminating one question about when it was appropriate to consider source control (all respondents said always), and making several minor wording changes. Non-expert respondents struggled with words like biodegrade and photo-oxidation, which we addressed by adding context or definitions. While we did eventually receive a few additional responses from Alaska regarding potential interviews, those respondents were unable to schedule or complete the interviews within a week of the initial recruitment e-mail. In order to complete data collection on schedule, we curtailed cognitive interview collection and proceeded to the survey.
The first question we asked participants was open-ended: "Briefly, what information do you think should be included in a summary reference booklet on oil spill response options-including mechanical on water and shoreline strategies, controlled burning, chemical dispersants, and source control-for it to be most useful to concerned members of the public?" Responses specified wanting to know the what and how of dispersant use, its pros and cons, and contextual information, including history and experience of use, as two responses illustrate:
• "Response booklet should include explanation of how dispersants work, summary of rules for their use (shallow v deep water, application rate, specific variety used and why), timeline of use history -so we had a set of rules in place at DHOS that guided use of dispersants . . . how did the set of rules change as a result of DHOS use of dispersants and their impacts on the gulf ecosystem?"
• "A description of the control methods, prior experience with each in other oil spills from various sources, the advantages and disadvantages of each."
The first respondent above describes her experience with oil spills as follows: "I am a geologist, but I know minimal detailed info about oil geology. I began learning about oil spill response on April 20, 2010 and implemented a [grant-type omitted to protect confidentiality] grant in MS related to DHOS in fall 2010 then in spring 2013 led a group of citizen scientists through a literature review DHOS to learn about the process of science and its role in emergency response." The second reports his experience with oil spills as follows: "Experience with BP oil spill. I have an oyster farm at [location omitted to preserve confidentiality] AL, on the MS sound. There is a chain of offshore islands 12 miles offshore. The water between each island was boomed. The entrance to the bay where I am located was boomed. We had no evidence of any oil at any time after the spill as determined by periodic sampling of sediment, water and oyster meat. I would say the spill response methods were effective in preventing oil from reaching my site." It swiftly became apparent that, given their investment and experience, these respondents had the expertise to interpret questions about oil spill response somewhat differently than most coastal residents, for which reason the sample was supplemented with a non-expert convenience sample in Seattle, as noted above.
Analyses of the cognitive interviews were conducted iteratively, and the survey items revised throughout the process, as initially planned. One salient result of this process was that the True-False response scale was easier for interviewees to interpret and use for the candidate survey questions than was the Strongly Agree-Strongly Disagree response scale, for which reason we switched to the True-False scale, despite questions raised about it by the survey expert we consulted. Including midpoints in response scales for attitudes has been shown to improve the reliability of responses, whereas including no-opinion options does not (Krosnick and Presser 2010; Krosnick et al. 2002). Including an explicit "Don't Know" option increases the frequency of don't know responses without improving the quality of the data, these authors argue. Given that uncertainty and lack of knowledge are of particular interest in oil spill contexts, we nevertheless deemed it appropriate to include an explicit don't know response category.

Survey Results
Overall, we received 36,978 responses to pairs of questions, and several thousand additional responses from individuals who did not answer the second question in the pair they were presented with or who were asked debriefing questions instead. Response numbers are provided with each question analyzed. The analyses presented in this article include those responses to initial (context-setting) or secondary questions that represent unique pairs or individual answers. We exclude the set of debriefing question pairs that were asked subsequently for methodological reasons; these are available on request (the response distributions are similar). Google Consumer Insights calculates response rates as the percentage of "impressions" (people who see the initial question) who answer it. Response rates calculated this way varied widely, from under 10% for some of the initial questions (the first in a pair) to more than 70% for some follow-up questions (the second in a pair, seen after the respondent answered the first question).
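The response-rate arithmetic described above (answers divided by impressions) can be sketched as follows; the impression counts are invented to reproduce the reported pattern of low first-question and high second-question rates.

```python
# Response rate in Google Consumer Insights-style reporting: the share of
# "impressions" (people shown a question) who answer it.
def response_rate(answers, impressions):
    if impressions <= 0:
        raise ValueError("impressions must be positive")
    return answers / impressions

# A first-in-pair question is seen cold, so rates are low; the second question
# is seen only by people who already answered the first, so rates are high.
# Both impression counts below are hypothetical.
first_q = response_rate(3_564, 15_038)
second_q = response_rate(2_610, 3_564)
print(f"first: {first_q:.1%}, second: {second_q:.1%}")
```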

Context-Setting Questions
Initial results suggest that people see ocean ecosystems as somewhat resilient but potentially vulnerable to the cumulative effects of major oil spills (Figure 1). A plurality (33% overall) selected the threshold view of how ocean ecosystems work ("Oceans are stable within limits. With a few oil spills, the oceans will return to a stable balance. Major oil spills will lead to dangerous effects"). Next most prevalent (27.3%) was the view that ocean ecosystems are fragile ("Oceans are delicately balanced. A few major oil spills will have catastrophic effects."), which women were significantly more likely to choose than men. Least frequently selected was the view that ocean ecosystems are very stable, and that major oil spills will have little to no effects. The response options were displayed in random order. In Figure 1, results are weighted by age. The median response time for this question was 34 s, which is relatively slow in relation to other questions posed in our survey.

Residents in coastal communities also see themselves as vulnerable in the event of a spill. When asked "How do you think a major marine oil spill in your region would affect your household's economic well-being?" a plurality (43%) selected "Major effects," with far fewer responding "Minor effects" (28.8%) or "No effects" (28.2%) (response order randomly reversed; 3,564 responses; response rate 23.7%). Of those with inferred demographics (weighted by age), women were significantly more likely than men to select "Major effects" (48% vs 40.4%) and men more likely than women to select "No effects" (33.4% vs 23.5%).
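A gender contrast like the one reported here (48% vs 40.4% selecting "Major effects") is typically assessed with a two-proportion test. The sketch below uses hypothetical group sizes, since the survey reports percentages rather than per-gender counts.

```python
import math

# z statistic for H0: p1 == p2, using the pooled standard error.
def two_prop_z(x1, n1, x2, n2):
    p1, p2 = x1 / n1, x2 / n2
    pooled = (x1 + x2) / (n1 + n2)
    se = math.sqrt(pooled * (1 - pooled) * (1 / n1 + 1 / n2))
    return (p1 - p2) / se

# Hypothetical group sizes: 1,500 women (48% "Major effects") vs
# 1,600 men (40.4%). |z| > 1.96 indicates significance at the 5% level.
z = two_prop_z(round(0.480 * 1500), 1500, round(0.404 * 1600), 1600)
print(f"z = {z:.2f}")
```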
As mentioned above, two open-ended questions were also included as context-setting questions in our survey. For the open-ended questions, respondents tend to write only a word or two; response times are on the order of 15-22 s, shorter if the question appears after another question and longer if it appears first.
The first open-ended question was: "What key information do you think should be in a booklet on oil spill response options, for it to be useful to you?" Similar to the results from the cognitive interviews, responses to this open-ended question do include mention of pros (clean up) and cons (costs) (Figure 2). However, the dominant response is "don't know," and the general picture that emerges is a focus on actions-what to do, how to prevent harm, how to clean up-with a secondary emphasis on damage and costs. Figure 2 is a graphical representation of the open-ended responses received; individual words and phrases are scaled in accordance with their relative prevalence across responses.
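The prevalence-based scaling used for word clouds like Figure 2 can be illustrated with a toy example; the responses below are invented stand-ins, not survey data.

```python
from collections import Counter

# Illustrative open-ended responses (invented).
responses = ["dont know", "how to clean up", "clean up costs",
             "prevent harm", "dont know", "costs"]

# Count word prevalence across all responses.
counts = Counter(word for r in responses for word in r.split())

def font_size(word, min_pt=10, max_pt=48):
    """Linearly scale display size from least to most frequent word."""
    lo, hi = min(counts.values()), max(counts.values())
    if hi == lo:
        return max_pt
    return min_pt + (counts[word] - lo) * (max_pt - min_pt) / (hi - lo)

sizes = {w: font_size(w) for w in counts}
print(sorted(sizes, key=sizes.get, reverse=True)[:3])
```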
In Figures 3 and 4, responses to "What comes to mind first when you think of using chemical dispersants to respond to marine oil spills?" paint a general picture of a response technology that people dislike and equate with pollution, characterizing dispersants as being as polluting as, or worse than, spilled oil. Responses in Figure 4 and on the left side of Figure 3 are from individuals who first received the ocean ecosystem resilience context-building question, and exclude four responses that were obviously nonresponsive. When grouped by sentiment, neutrals (don't know) dominate, but negative sentiments greatly outweigh positive sentiments. Response metrics from Google Analytics for the data shown in Figure 3B reveal that the median response time was 21.9 s. Figure 4 groups responses from the left cloud in Figure 3 by sentiment (Neutral, Negative, Positive) using the standard algorithm employed by Google Analytics.
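The sentiment grouping step can be illustrated with a simple lexicon-based classifier. This is not the Google Analytics algorithm used in the study; the word lists and example answers are invented for illustration only.

```python
# Minimal lexicon-based stand-in for sentiment grouping (invented word lists).
NEGATIVE = {"toxic", "pollution", "poison", "bad", "harmful"}
POSITIVE = {"cleanup", "helpful", "effective", "good"}

def sentiment(response):
    """Classify a free-association response as Negative/Positive/Neutral."""
    words = set(response.lower().split())
    if words & NEGATIVE:
        return "Negative"
    if words & POSITIVE:
        return "Positive"
    return "Neutral"  # includes "don't know"-type answers

answers = ["toxic chemicals", "don't know", "effective cleanup", "pollution"]
groups = {}
for a in answers:
    groups.setdefault(sentiment(a), []).append(a)
print({k: len(v) for k, v in groups.items()})
```

Real sentiment classifiers handle negation, phrases, and out-of-vocabulary words; the sketch only shows the grouping logic.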

Initial Conditions, Fate and Transport, Logistics
With regard to how oil behaves, as shown in Figures 5 and 6, the modal response is don't know, both for whether weathering decreases oil dispersibility and for whether evaporated oil is broken down by sunlight into environmentally safe compounds. Of the remaining responses, however, False (and maybe false) are more prevalent than True (and maybe true), even though weathering does in fact decrease dispersibility (Daling et al. 1997; NRC 2005; Payne and Phillips 1985; see the online supplemental information [SI]).
A total of 504 respondents were asked both the ocean ecosystem resilience question and a question (Q21) regarding the fate of evaporated oil. As illustrated in Figure 6, those who deem ocean ecosystems stable are more likely to consider it true that evaporated oil is broken down by sunlight into environmentally safe compounds, while those who think ocean ecosystems are fragile are more likely to deem it false (45.58, p < .001). Responses differ as one might expect for adherents to different models of ocean ecosystem resilience whose judgments of oil behavior are based on general inferences from, or analogies with, other toxics.
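An association of this kind (45.58, p < .001, across 504 paired responses) is what a chi-square test of independence on the worldview-by-judgment contingency table yields. The cell counts below are invented (they sum to 504, but are otherwise illustrative only).

```python
# Pearson chi-square statistic for an r x c contingency table (pure stdlib).
def chi_square(table):
    row_totals = [sum(row) for row in table]
    col_totals = [sum(col) for col in zip(*table)]
    total = sum(row_totals)
    stat = 0.0
    for i, row in enumerate(table):
        for j, obs in enumerate(row):
            expected = row_totals[i] * col_totals[j] / total
            stat += (obs - expected) ** 2 / expected
    return stat

# Hypothetical counts. Rows: worldview (stable, threshold, fragile);
# columns: True / Don't know / False on Q21. Sums to 504 respondents.
observed = [[40,  60, 20],   # "stable" respondents lean True
            [30, 120, 54],
            [20,  80, 80]]   # "fragile" respondents lean False
stat = chi_square(observed)
print(f"chi-square = {stat:.2f}")  # critical value 18.47 at df=4, p=.001
```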

Monitoring and Response Options and Decisions
At the heart of the expert decision model are elements of the response decision itself, including baseline information, anticipated effectiveness of response options, preferences for different response options, and ways of monitoring the effectiveness of responses once they are implemented.

Baseline Information
The majority of respondents find it either very important or essential to know the four kinds of information suggested (Q7, Q8, Q9, Q10; Figure 7), one of which (ingredients) was suggested by the project's Twitter analysis, as well as by the specialists who helped develop the decision model, because they knew from experience that it was important to others.

Anticipated Response Effectiveness
As shown in Figure 8, the similarity of judgments for the removal of oil by controlled burning and the hit rate for dispersants applied by aircraft suggests that people are using a common sense model of effectiveness for difficult tasks, with the majority of responses in the lowest two response categories ("none or a tiny percentage" and "a small percentage, less than a third"; the higher categories were "the majority, more than half" and "all or almost all, more than two thirds").

Figure 8. Controlled burning removal percentage and chemical dispersant hit rate.

Almost half responded that they did not know if recovery of over 25% of oil spilled in open water with mechanical response options such as booms and skimmers is common. Nevertheless, more judged this true (34.2%, including true/maybe true) than false (18%, including false/maybe false). This suggests unwarranted optimism about the effectiveness of mechanical strategies relative to other response strategies (Allen 1988; ASTM 2006, 2007a,b, 2011; see the SI).

Decision Preferences
Dispersant use and controlled burning are more likely to be judged as never appropriate than as always appropriate, although a plurality of respondents select the middle option, sometimes appropriate (Figures 9 and 10). Responses are similar across coastal regions.

Monitoring
Monitoring is essential for both response effectiveness and health outcomes (Figure 11; SI Q17, Q18, Q23). Despite scientific evidence that sensory testing is an effective approach to assessing oil contamination of seafood (FDA 2010, 2013; NOAA 2001; Yender et al. 2002; see the SI), coastal residents are more likely to think it invalid than valid, although about a third say they don't know (Figure 10). Although 39% selected don't know, the majority of the remainder think air quality monitoring enables response organizations to protect human health from oil spill response-related air pollutants. Almost half (45%) were uncertain of the informativeness of aerial visual inspection with regard to dispersant effectiveness, with the remainder nearly equally split between True/Maybe true and False/Maybe false (Allen 1988; ASTM 2008; NRC 2005; see the SI).

Figure 11. Perceptions of the validity and effectiveness of monitoring.

Impacts (Effectiveness)
Don't know responses are also the mode for questions about the fate and transport of chemically dispersed oil (Figure 12; SI Q11, Q26, Q27, Q28). About 30% of respondents thought it either True or Maybe True that chemical dispersants enhance natural rates of biodegradation, compared to 21% who thought this false. That is, a majority (70%) either did not know or disagreed with this statement, which dispersant scientists consider true (NRC 2005; see the SI).

Impacts (Toxicity)
With respect to persistence and toxicity of oil and dispersant, majorities think that dispersed oil at low concentrations (54.5%) and the dispersant ten hours after application (50.3%) are toxic (Figure 13; SI Q20, Q22, Q24, Q29). Judgments are split regarding whether ecological effects after dispersant application are due primarily to the dispersed oil rather than the dispersant (28.3% True/Maybe true; 27.8% False/Maybe false; 44% Don't know), a statement with which oil spill scientists strongly agree.

Perceptions of the Science of Oil Spill Response
Although the modal response to questions about the science of oil spill response was "don't know" (ranging from around a third to almost half, 47%, of respondents for each question), more respondents were confident than not that laboratory studies can predict the impact of dispersed oil on marine life (42.6% True/Maybe true versus 14.7% False/Maybe false; Figure 14). This holds, to a slightly lesser degree, for controlled burning as well.
In contrast, fewer respondents think scientists agree on the effectiveness or toxicity of dispersants than not (True/Maybe true: 21.2% for effectiveness, 25.2% for toxicity; False/Maybe false: 31.8% for effectiveness, 31.4% for toxicity) (Figure 14). An examination of Figure 15 shows how asking this question after the context-building ocean ecosystem resilience question results in a lower proportion of "don't knows," but paints the same overall picture of relative lack of faith in dispersant science. These findings echo the comments in Levine and Picou (2012) on the effects of failed consensus among government agencies on dispersant use in the Deepwater Horizon spill. Most respondents do not feel they know whether there is scientific agreement on the effectiveness of chemical dispersants, but a majority of those responding tend to think of dispersants as persistent (detectable in fish after a year) and toxic (toxicities due to dispersant rather than oil).

Figure 14. Perceptions of the science of oil spill response.

Figure 15. Science of oil spill response when preceded by the ocean ecosystem resilience question.

DISCUSSION OF RESULTS
These results suggest that coastal respondents have limited interest in and knowledge about oil spill response, but are nevertheless negatively disposed toward dispersant use on oil spills. As suggested in the context of prior spills, oil spills may stigmatize involved organizations, places, and activities (Beamish 2001; Leschine 2002).
In all questions related to lay respondents' perceptions of the toxicological effects of chemical dispersants, the modal response is "Don't Know." However, respondents who express a negative view of dispersant toxicology outnumber those who express a positive one in every case. Coastal respondents also express doubt regarding the degree of expert consensus about the effects of dispersants. Aside from those who choose "Don't Know," the next most common response regarding whether scientists agree about the efficacy or toxicity of dispersants is "False." Viewed in conjunction, these results speak to a robust skepticism among coastal respondents toward chemical dispersants generally. This is interesting in light of the fact that the questions for which "Don't Know" is chosen most frequently are those addressing the mechanisms and means by which dispersants work (Figure 12).

Implications for Current Practice
Common sense models of the shortcomings of different response options may be driving some of the judgments exhibited in these data (i.e., respondents are skeptical of claims made regarding the efficacy and risks of response technologies). Unsurprisingly, the data also speak to a general unfamiliarity with the technical aspects of oil spills and chemical dispersants. However, the pattern of missing knowledge and misconceptions of fate and transport processes suggests an opening for developing a deeper appreciation of the tradeoffs made in oil spill response decisions. Part of the challenge appears to be that respondents (understandably) seem to view all things oil-spill related in a negative light. This makes it difficult for respondents to address counterfactuals, since more often than not they view both oil spills and chemical dispersants with negative sentiment. Table 1 represents a greatly simplified contingency table of decision-making with regard to the use of chemical dispersants.
The overall pattern of responses indicates that lay respondents tend to conceptualize their answers in terms of comparisons (1) between Scenarios A and B and (2) between Scenarios A and C. However, Scenario A and Scenario C are irrelevant, since dispersants would of course not be applied in the absence of a marine oil spill and the questions in the survey take spilled oil to be a given (obviating Scenario A). More effectively framing this issue as a comparison between Scenario B and Scenario D might provide a more accurate conception of the decision and foster a wider understanding of oil spill response policy (more generally) and chemical dispersant use (more specifically) as rooted in tradeoffs. Similarly, respondents appear to struggle to weigh response alternatives against one another. In particular, instead of comparing Scenario D and Scenario F, respondents tend to compare each response scenario to the baseline of either no response (Scenario B) or no spill (Scenario A). Ultimately, elevating understanding and discourse in this regard could have significant social and environmental benefits. No matter how heightened public understanding becomes, it is expected that the use of chemical dispersants will remain somewhat controversial. However, focusing on the tradeoffs associated with dispersant use (i.e., between Scenario B and Scenario D) can potentially foster a more productive discourse among the public, politicians, and policy makers. Given the time demands of oil spill response and the detrimental impact of a major spill, any improvement to the decision-making process (whether that process results in the application of chemical dispersants or not) would be beneficial.
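The scenario comparisons above can be made concrete in code. Table 1 is not reproduced here, so the scenario definitions below are an inferred, hypothetical reconstruction from the text's comparisons (A vs B, A vs C, B vs D, D vs F), used only to illustrate the tradeoff framing.

```python
# Hypothetical reconstruction of Table 1's scenarios: (spill state, response).
scenarios = {
    "A": ("no spill", "no response"),
    "B": ("spill", "no response"),
    "C": ("no spill", "dispersants"),  # never a real option: no spill occurred
    "D": ("spill", "dispersants"),
    "F": ("spill", "other response"),  # e.g., burning, booms, skimmers
}

def decision_relevant(s1, s2):
    """A comparison informs a response decision only if both scenarios share
    the 'spill' state: the spill is a given, so no-spill baselines cannot
    discriminate among response options."""
    return scenarios[s1][0] == scenarios[s2][0] == "spill"

print(decision_relevant("B", "D"))  # the tradeoff the text recommends framing
print(decision_relevant("A", "D"))  # the baseline respondents default to
```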