Using Behavioral Skills Training to Teach Peer Support Workers to Respond to Ethical Scenarios

ABSTRACT Peer support workers are people living with a mental illness and/or substance use disorder who use their lived experience and training to support people in recovery. Setting boundaries when faced with an ethical scenario is an important skill that peer support workers must acquire. This report from the field examined the effects of group-based behavioral skills training (BST) for teaching peer support workers to set boundaries by restating the person's needs, stating that they cannot engage in the requested behavior, and redirecting the person to an appropriate resource or response. Four of five participants met the mastery criterion after BST plus supplemental experimenter feedback. Moreover, participants found the training acceptable. These results suggest BST may be useful for teaching ethical skills to peer support workers within the context of a public health workforce development program.


Introduction
Peer support workers, or simply peers, are people in long-term recovery from a mental illness and/or substance use disorder who use their lived experience and training to help others on their recovery journey (Substance Abuse and Mental Health Services Administration, 2020). Peer support increases engagement in care, sense of hope, and social networks and decreases inpatient and emergency service usage for people living with a mental illness (Chinman et al., 2014; L. Davidson et al., 2012; Pitt et al., 2013). Peer support also reduces relapse and increases satisfaction in treatment for people living with substance use disorders (Reif et al., 2014). Peer support has been Medicaid reimbursable since 2007 (Smith, 2007), and there are approximately 30,000 peer support workers in the USA (Mental Health America, n.d.).

CONTACT Jonathan A. Schulz, jonathan.schulz@uvm.edu; 1 South Prospect Street, Burlington, VT 05401, USA. This research was completed in partial fulfillment of the requirements for the first author's doctorate degree from the Department of Applied Behavioral Science at the University of Kansas. The first author is now at the Vermont Center on Behavior and Health at the University of Vermont. Supplemental data for this article can be accessed online at https://doi.org/10.1080/01608061.2023.2198740

The skills peers need to be successful are outlined in SAMHSA's core competencies for peer providers (SAMHSA, 2015) and include listening, telling their story, and helping in crisis. Peers learn these skills through a certification process and additional training, which vary by state (Doors to Wellbeing, n.d.; Kaufman et al., 2016). Similar to other credentials, certification establishes standards that reflect the minimum skills needed for a professional (Salzer et al., 2010). Therefore, peers commonly require additional training to learn skills not addressed through certification.
Peers must also learn to behave ethically and in accordance with various ethical codes of conduct, such as the National Certified Peer Recovery Support Specialist (NCPRSS) Code of Ethics (National Association for Alcoholism and Drug Abuse Counselors, n.d.) and the State of Kansas Certified Peer Specialist Code of Ethics (Kansas Consumer Advisory Council for Adult Mental Health, n.d.). Example codes for peers include maintaining confidentiality, refraining from using undue influence, and ensuring peers have no conflicts of interest. These codes ensure peers are behaving in the best interest of those with whom they are working. Setting clear boundaries is another aspect of the peer ethical code. The definition of a boundary varies, but generally a boundary relates to the behavior that is acceptable for a professional (Aravind et al., 2011; Doel et al., 2009). Boundary crossings involve an unacceptable act counter to one's professional or personal codes, such as giving a community member money or sharing one's personal phone number. Crossing boundaries can lead to harm for both the client and the practitioner, including legal issues and a loss of trust or increased dependence on the practitioner (Bonosky, 1995; Fronek et al., 2009). To avoid the negative effects of boundary crossings, peers must learn how to respond appropriately when a client presents an ethical scenario (i.e., how to set a boundary and inform the client what the professional can and cannot do).
Behavioral skills training (BST) is an empirically based training procedure comprising instructions, modeling, role-play, and feedback (Parsons et al., 2012; R. Miltenberger, 2016). Numerous studies have demonstrated the effectiveness of BST across settings, behaviors, and populations (e.g., Chambers & Radley, 2019; R. G. Miltenberger et al., 2004; Stocco et al., 2017). Despite this rich literature, researchers have not yet evaluated the effectiveness of BST for teaching ethical skills. Lerman et al. (2015) note that formal training in ethics is an important, yet often overlooked, topic for people who work with others. They recommend trainers provide trainees with ethical scenarios that can be discussed, modeled, and role-played. The purpose of this report from the field is to describe the effects of remote, group-based BST for teaching a novel skill, boundary setting in response to an ethical scenario, to a novel population, peer support workers, within the context of a public health peer-support workforce-development program.

Participants and setting
Participants were five peers in a local workforce development program that trains people in long-term recovery from a mental illness and/or substance use disorder to become peer support workers. The mean age of the participants was 25.8 years, and they had been in recovery for an average of 4.5 years (range, 1-10 years). A majority had completed some college (Table 1).
The workforce development program is a 1-year program that partners with various community organizations to integrate peer support throughout the local behavioral health system. The peers who participate in this program work 15 hr per week and receive 3 hr of training and 1 hr of supervision weekly. Training covers a wide range of topics, including the SAMHSA core competencies for peer workers, listening skills, community resources, crisis intervention skills, and ethics. The current training was implemented as part of a weekly training period; therefore, the training took place in 1 day.
The training was conducted in January 2021. Due to COVID-19 safety restrictions, the training was conducted online through Zoom teleconferencing (version 5.0.4), with participants and trainers located at their homes. Potential participants received their consent forms through e-mail 1 week before baseline. Before baseline data were collected, the consent form was reviewed vocally, and participants were provided the opportunity to ask questions before the training. Participants voluntarily signed the consent form online through REDCap (Harris et al., 2009) to consent to data de-identification and dissemination. The local university's Human Research Protection Program approved all study procedures (Study #00145979).

Peer actors
Three peers, or peer actors, who previously completed the program assisted with the creation and facilitation of the training and were compensated for this work. These peer actors, who all identified as women, aided in creating videos for the training and conducting baseline and post-training trials. Two of the peer actors also assisted in facilitating the training. All peer actors had completed at least 1 year of training as peers and were trained by the experimenter on leading a role-play session and providing feedback. The peer actors were selected in collaboration with a local public health leader because they were experienced fellows whom members of the leadership team deemed competent.

Training scenarios
Two 90-min focus groups, each consisting of four peer support workers who had been in the program for a year, were conducted to identify important areas for training. In addition to identifying this boundary training as important, training and generalization scenarios related to boundary issues were identified during these focus groups and from examples in the peer support literature. A total of 19 scenarios describing a community member crossing a boundary and two scenarios of a coworker or supervisor crossing a boundary were identified (Supplemental Table S1), along with the code or guideline being crossed. The three peer actors rated each scenario on a 6-point Likert-type scale (1 = extremely easy; 6 = extremely difficult). Scores were then averaged for each scenario, with higher scores indicating more difficult scenarios. Five of the 19 scenarios were used for training and the remaining 14 for experimental trials.

Training videos
We selected five of the 19 scenarios for use in the modeling component of BST. Scenarios ranged in difficulty; we presented easier scenarios first and progressively increased the difficulty during the training. Using Zoom, we recorded two peer actors who were in separate locations (one playing the part of the community member and one playing the part of a peer) correctly and incorrectly responding to the five scenarios. Each scenario with incorrect responding modeled a different step implemented incorrectly.

Experimental trial videos
For the remaining 14 scenarios that were not used in the training videos, we created short videos of a peer actor attempting to cross the boundary reflected in the scenario. The purpose of these videos was to provide participants with an opportunity to demonstrate how they would respond to a person attempting to commit a boundary crossing. We also made videos for two additional scenarios for the generalization probes. Thus, videos were created for a total of 16 scenarios, each of which had two parts: (a) a peer actor crossing a boundary and (b) the peer actor attempting to repeat the boundary crossing in response to an initial "no." Peer actors engaged in approximately two to three statements to set up the scenario before attempting to cross the boundary (e.g., "Hey! Do you have like five bucks? I can pay you back, I just need to buy some food right now because I don't have any."). The second part of the scenario, which involved the repeated boundary crossing, included a short statement and depicted the peer actor attempting to cross the boundary a second time in response to a peer saying "no" (e.g., "It's just five bucks.").
Each participant experienced a different scenario on each trial; the order was determined a priori to control for difficulty. Specifically, we used the difficulty ratings the peer actors provided for each scenario to ensure that the average difficulty of all scenarios implemented during baseline matched the average difficulty of all scenarios implemented during post-training. Once we selected the scenarios, we used a random picker (Picker Wheel, n.d.) to randomize the order in which we presented the scenarios in each condition.
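The assignment logic above can be sketched as a short script. The report does not specify how the difficulty-matched split was produced (and the study used the Picker Wheel website, not code, for randomization), so the ratings, the greedy split, and all names below are illustrative assumptions rather than the study's actual procedure.

```python
import random

# Hypothetical mean difficulty ratings (1 = extremely easy, 6 = extremely
# difficult) for the 14 scenarios reserved for experimental trials.
ratings = {f"scenario_{i}": r for i, r in enumerate(
    [1.3, 1.7, 2.0, 2.3, 2.7, 3.0, 3.3, 3.7, 4.0, 4.3, 4.7, 5.0, 5.3, 5.7])}

# One simple way to balance average difficulty: sort scenarios from hardest
# to easiest and deal them alternately into the two conditions.
ordered = sorted(ratings, key=ratings.get, reverse=True)
baseline, post_training = ordered[0::2], ordered[1::2]

def mean(names):
    """Average difficulty of a set of scenarios."""
    return sum(ratings[n] for n in names) / len(names)

print(abs(mean(baseline) - mean(post_training)) < 0.5)  # True

# Randomize presentation order within each condition (standing in for the
# random-picker step described in the text).
random.shuffle(baseline)
random.shuffle(post_training)
```

The alternating deal is only one of several ways to equalize set means; any split whose averages match would serve the stated purpose.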

Response measurement
The primary dependent variable was performance on boundary setting, measured as the percentage of steps in a task analysis (TA) completed correctly. The steps of the TA required participants to (1) orient to the community member, (2) restate the community member's needs, (3) set a boundary, (4) redirect the community member to appropriate resources or another topic, and (5) repeat Steps 3 and 4 as necessary. Each experimental trial provided participants with an opportunity to set a boundary by responding to a contrived scenario in the form of a video of a hypothetical community member attempting to cross a boundary. We tested for Step 5 (i.e., repeats Steps 3 and 4) on one trial in baseline and one trial in post-training by requiring the participant to respond to the second part of the video described above (i.e., part b). Of note, the existing peer literature did not provide an appropriate TA for how to respond to a boundary violation; therefore, this TA was developed as part of the study. The steps were determined through consultation with experts (e.g., behavior analysts, local public health program leaders, former fellows) and the existing literature (e.g., Bailey & Burch, 2016; White et al., 2007). The TA was also tested to ensure it was conceptually systematic.
Steps were scored as correct, incorrect, or omission, with omissions graphically displayed the same as incorrect. Step 5 could also be scored as not applicable. Table 2 displays operational definitions for scoring each step. We calculated a percentage correct for each trial by dividing the number of steps completed correctly by the total number of steps and multiplying by 100. The mastery criterion was set at two consecutive trials with 100% of steps completed correctly.
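The per-trial calculation and mastery check can be expressed as a short sketch. The function names and example scores are hypothetical, and the sketch assumes not-applicable steps are excluded from the denominator, which the report does not state explicitly.

```python
# Each trial is a list of per-step scores from the TA: "correct",
# "incorrect", "omission", or "na" (Step 5 when no second
# boundary-crossing attempt was presented).

def percent_correct(trial):
    """Percentage of applicable TA steps completed correctly."""
    applicable = [s for s in trial if s != "na"]
    return 100 * sum(s == "correct" for s in applicable) / len(applicable)

def met_mastery(trials):
    """Mastery criterion: two consecutive trials at 100% correct."""
    scores = [percent_correct(t) for t in trials]
    return any(a == 100 and b == 100 for a, b in zip(scores, scores[1:]))

trial = ["correct", "omission", "correct", "correct", "na"]
print(percent_correct(trial))  # 75.0
```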

Interobserver agreement and procedural integrity
Interobserver agreement (IOA) was calculated for 39.2% of trials, with a minimum of 33% of randomly selected trials in each condition scored by a secondary observer who watched videos of participant trials. An agreement was scored if both observers scored a step of the TA the same; a disagreement was scored if they did not. Agreement was calculated on a step-by-step basis by dividing the number of agreements by the total number of agreements plus disagreements and multiplying by 100. Average IOA across all participants was 88.6% (range, 80-94.3%).
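The step-by-step IOA described above amounts to point-by-point percentage agreement, which can be sketched as follows; the observer records and names here are hypothetical, not the study's data.

```python
# Step-by-step interobserver agreement for one trial: each observer's
# record is a list of per-step TA scores.

def step_ioa(obs1, obs2):
    """Agreements / (agreements + disagreements) * 100."""
    agreements = sum(a == b for a, b in zip(obs1, obs2))
    return 100 * agreements / len(obs1)

primary   = ["correct", "correct", "incorrect", "correct", "na"]
secondary = ["correct", "omission", "incorrect", "correct", "na"]
print(step_ioa(primary, secondary))  # 80.0
```

The same ratio of correctly implemented components to total components, multiplied by 100, yields the procedural integrity score reported in the next paragraph.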
An independent observer collected procedural integrity data on the entirety of the training to assess the degree to which BST was implemented as described. This observer watched the video recording of the training and scored whether each component of BST was implemented correctly. Procedural integrity was calculated by dividing the number of components implemented correctly by the total number of components and multiplying by 100. Overall, 100% of the steps were implemented correctly.

Experimental design and procedure
A modified nonconcurrent multiple-baseline design across participants was used to assess the effects of group training using BST on setting a boundary. This design, which is essentially an AB design with modified baseline lengths and is used for purely practical reasons, allowed for assessment of experimental control while operating within an applied context (Gast et al., 2018). This report from the field was a collaborative project operating within an existing training program, and this design gave us the flexibility to deliver the training to multiple participants within time constraints. The analysis consisted of three conditions: (a) baseline, (b) post-training, and (c) feedback. Follow-up data were collected between 1 and 5 weeks after training; the timing varied due to participant availability.

Baseline
The purpose of this condition was to assess the percentage of boundary-setting steps completed correctly before training. Each participant completed these trials individually while the remaining participants were in a Zoom breakout room. A new trial began each time the primary trainer presented a new video scenario to the participant. A trial concluded when a participant did not provide a vocal response for 5 s or indicated that they did not have anything else to say. Participants were instructed to respond to each situation as they best saw fit and did not receive feedback or supplemental information if they asked questions. Trials were recorded and scored from the video.

Training
The purpose of the group training was to teach participants to set a boundary. For role-play and feedback, participants were randomly organized into one group of three (P1, P3, and P4) and one group of two (P2 and P5) using the Zoom randomization feature and placed into breakout rooms with peer-actor trainers. Participants role-played the five scenarios modeled previously. Peer-actor trainers provided corrective feedback for steps completed incorrectly and praise for steps completed correctly. Participants role-played as many times as possible in the approximately 30 min allotted for this component, rotating participants after each role-play. Participants were also required to demonstrate at least one role-play to the primary trainer, and to receive additional feedback as necessary, before moving to post-training.

Post-training
The purpose of this condition was to assess the percentage of boundary-setting steps participants correctly implemented after training. Participants experienced post-training trials individually in a breakout room. Procedures were the same as in baseline except that the scenarios were different. Data were collected in vivo by the primary trainer within the Zoom meeting to assess whether participants met the mastery criterion.

Experimenter feedback
The purpose of this phase was to provide additional feedback to participants whose performance did not meet the mastery criterion during post-training. In this phase, data were collected in vivo, and participants received feedback individually if they completed two consecutive trials at less than 100% of steps completed correctly. Participants received corrective feedback for the steps completed incorrectly and praise for the steps implemented correctly in the previous two trials. This phase ended once participants met the mastery criterion. However, one participant (Participant 5) finished this phase before meeting the mastery criterion because no more new scenarios were prepared to which they could respond.

Generalization probes
We tested participants' generalization of setting a boundary in two untrained scenarios: (a) a coworker, rather than a community member, crossing a boundary and (b) a supervisor crossing a boundary. Single-trial probes were conducted at both baseline and post-training for each scenario. The procedure for conducting the generalization probes was similar to baseline. All participants responded to the same two scenarios. The generalization probes did not include an assessment of performance on Step 5.

Follow-up
The purpose of this phase was to assess the extent to which correct boundary setting was maintained between 1 and 5 weeks after training.
Results

During post-training, the mean percentage correct across all participants increased slightly to 66.8% (range, 25-100%), and participants did not meet the mastery criterion. The error analysis of post-training trials indicated that Step 2 remained the most commonly omitted or incorrectly implemented step across participants. Because the mastery criterion was not met in post-training, all participants received experimenter feedback. Four of five participants met the mastery criterion in this condition in two to four trials. P5 never met the mastery criterion during experimenter feedback; however, performance increased during this condition (M = 89.29%; range, 50-100%). The error analysis indicated that Step 4 was the most commonly incorrectly implemented or omitted step during experimenter feedback.
Performance maintained at 1-week follow-up for three of four participants. For P5, who experienced their first follow-up at 2 weeks, performance did not maintain. Performance maintained at 5-week follow-up for one of the three participants who experienced follow-up probes at this time. The error analysis indicated that Steps 2 and 4 were the most common steps to be omitted or implemented incorrectly at 1- and 2-week follow-up, and Step 2 at 5-week follow-up.
Two of four participants' boundary-setting behavior generalized to novel coworker and supervisor scenarios. For one participant, P2, performance on the generalization probes maintained at 1-week follow-up. P4 did not experience generalization probes during baseline; this participant asked to end baseline procedures early after noting that they began to feel uncomfortable. The error analysis indicated that Step 2 was the most commonly omitted or incorrectly implemented step on generalization probes across all participants.
Finally, results of the social validity questionnaire indicated that, on average, participants agreed to strongly agreed with the statements measuring the acceptability of the intervention (Supplemental Table S2). All five participants identified role-plays as the most important aspect of the intervention. One participant indicated the instructions were the least useful aspect.

Discussion
The purpose of this report from the field was to examine the effectiveness of remote, group-based BST for teaching a novel skill, boundary setting, to peer support workers within the context of a peer-support workforce-development program. Post-training data revealed participants did not meet the mastery criterion and therefore needed supplemental experimenter feedback. Upon introduction of the feedback, four of the five participants reached the mastery criterion, which was maintained at 1-week follow-up probes for three participants and at 5-week follow-up probes for one participant. Performance generalized for two of the five participants. Results of a social validity questionnaire revealed participants found the training acceptable. Taken together, these findings provide preliminary evidence of the acceptability and effectiveness of group-based BST supplemented by feedback for teaching appropriate boundary setting in an applied context, with effects that maintained over a brief period and generalized for some participants.
Although BST has been successful at teaching a wide range of skills to varying populations (Parsons et al., 2012; Miltenberger, 2016), it has not been used to teach ethical skills, an area of formal training that is often overlooked (Lerman et al., 2015). This study extends the ethics literature by teaching trainees specifically how to set a clear boundary when faced with an immediate boundary crossing. This contribution is important because it addresses one of the most significant ethics questions; namely, what to do in the moment to avoid unethical behavior (Bailey, 2021). Moreover, this study extends the evaluation of boundary training beyond surveys or choosing a response based on a vignette and is, to our knowledge, the first boundary training study to measure a trainee's actual response to a contrived boundary crossing, a limitation of the extant boundary training literature identified by previous researchers (Davidson, 2005; Kunaparaju et al., 2018).
Next, this training extends the applicability of BST to a new topic and population. Specifically, this study demonstrates that BST, with supplemental feedback, can be used to teach an ethical skill (i.e., boundary setting) to peers in a way that is acceptable to training participants. The findings also demonstrate that BST is a useful procedure for training behavioral health peer support workers, a population with which BST had not previously been used.
Finally, a strength of this study is its collaborative and participatory nature. The study was developed using focus groups with those who do the work and included those most affected by the issue (i.e., peers) in the training. This collaboration is important because it ensured that the training covered an important and needed topic and that components of the training (e.g., example scenarios) were relevant. This study demonstrated a participatory approach to behavioral research by including those most affected in determining the training topic, developing training materials (e.g., videos and scenarios), facilitating the training, and sharing and discussing results with participants after the training.
Importantly, this study also provides an initial step in developing empirical ethics training from which future research can learn and build. Although the BST training was found to be acceptable and participants improved their responding after additional feedback, the findings were not robust. There may be several reasons for this outcome. First, a competency-based approach was not adopted due to the applied nature of the work. Results might have been more immediate and clearer if participants had role-played and received feedback until a mastery criterion was met before moving to post-training trials. Future research should address this issue. Second, training appropriate responses to boundary crossings, and ethics training in general, presents several unique challenges. One challenge is that ethics may involve "gray areas," ambiguity in correct responding, or ethical dilemmas that are no-win situations (Graber & O'Brien, 2019). Thus, the dependent variable is complex, and there is not one scripted response for all situations but rather many topographies of responses that may fall under a general response class. Additionally, there are conceptually two general responses involved in every scenario: (a) discriminating that a boundary crossing has occurred and (b) responding appropriately to the boundary crossing. Teaching people to notice an ethics violation (i.e., that a boundary crossing has taken place) was not addressed in this study and is an important direction for future ethics research (Bailey, 2021). Another challenge is that boundary crossings happen in contexts of a power imbalance (Kendall et al., 2011), which may make it harder to set a boundary. A final challenge is that ethical scenarios may lead to an "emotional response" (Fronek et al., 2009; Fronek & Kendall, 2017). That is, ethical issues may exhibit stimulus properties that are similar to prior aversive events in people's lives and therefore elicit emotional responses and occasion escape behavior. Taken together, these issues may make training appropriate responses to boundary crossings, and ethics in general, more difficult than training other work skills. However, there remains a dearth of research in ethics and therefore opportunities for researchers to explore ethical discrimination training, improve upon methods to teach how to immediately respond to an ethics violation, and assess generalization.
There are several limitations to this report from the field. First, we were not able to collect measures in the work setting due to pandemic-related restrictions and confidentiality issues. The findings would be stronger if follow-up data had been collected while peers were working with community members, as doing so would demonstrate generalization to the natural work environment. Next, as noted previously, participants were not required to meet a mastery criterion during training, and the training did not ensure participants could discriminate a boundary crossing. Finally, a modified nonconcurrent multiple-baseline design across participants was used because it was the most feasible approach given the previously noted logistical concerns. However, demonstrating that BST is acceptable, effective, and possible to implement within the confines of already occurring training parameters is important to ensuring its widespread use.
The training began with an introduction to boundaries and a rationale for the training. Next, the trainer provided detailed written (via PowerPoint slides) and vocal instructions on how to respond to a boundary crossing. The slides provided instructions on what to do, gave examples of what to say, and listed common mistakes. The instruction component lasted 11 min and 31 s. Participants then viewed the five training scenarios being responded to both correctly and incorrectly during the modeling component. Participants discussed what steps the models had implemented correctly and incorrectly after each video. The modeling component lasted 33 min and 54 s.
For each item, a higher score represented greater participant acceptability.Additional open-ended questions asked participants what steps of the boundary-setting TA they found most and least useful, what aspects of the training they found most and least useful, and to provide any other feedback.Participants anonymously completed this questionnaire via REDCap.

Figure 1. Results for all participants. Note. Percentage of steps completed correctly in the task analysis (TA). Along the left y-axis is percentage correct, along the right y-axis are the boundary steps implemented by participants, and along the x-axis is the trial number. Shaded boxes indicate that the corresponding step in the TA was completed correctly, open boxes indicate that the step was completed incorrectly or omitted, and boxes with diagonal stripes indicate that the step was not applicable. BST = behavioral skills training; PT = post-training; EFB = experimenter feedback. * indicates the participant complied with the unethical request. † indicates a trial in which the participant did not receive feedback due to a technological error.

Table 2. Boundary setting component checklist.

1. Peer orients to the community member
   Correct: Orients toward the camera; maintains a neutral or positive facial expression; may nod head and engage in approving sounds (e.g., "uh huh")
   Incorrect: (1) Reacts with an emotional face (e.g., surprised, disgusted), or (2) interrupts
2. Peer restates the community member's need or situation
   Correct: Describes the community member's needs, wants, or situation (e.g., "What you are saying is . . .," "I understand . . .," "That must be difficult . . .")
3. Peer sets a boundary
   Correct: Emits a "no" response (e.g., states that they cannot engage in the requested behavior)
   Incorrect: (1) Makes an excuse or gives an ambiguous response, or (2) complies with the request
   Omission: Does not emit a "no" response class
4. Peer redirects community member to an appropriate resource or another topic
   Correct: (1) Describes an appropriate resource or another way to meet the need or resolve the situation (e.g., "Here is what I can do . . ."), or (2) begins discussing other topics on which to provide support
   Incorrect: Describes an inappropriate resource
   Omission: Does not redirect to another resource or topic
5. Peer repeats Steps 3 and 4
   Correct: Sets the boundary again and redirects to resources or another topic
   Incorrect: (1) Begins to make an excuse or give an ambiguous response, (2) complies with the request, or (3) completes only one of Steps 3 or 4
   N/A: No opportunity to set the boundary for a second time on the trial