We followed the Standards for Reporting Implementation Studies [89] (StaRI; see Additional file 1) for describing our project. All procedures were reviewed by the RAND Corporation Institutional Review Board and determined to not constitute human subjects research (Protocol #2020-N0607); nevertheless, we will follow all ethical principles for the protection of human research participants to minimize any risk of harm.

Research design

Figure 1 summarizes our approach to developing (Aim 1) and evaluating (Aim 2) the fiscal mapping process. These aims have distinct designs, but will be completed concurrently over a 2-year period and inform each other throughout. Overall, we will use a mixed-methods [90] approach that examines the convergence between qualitative and quantitative data to provide an in-depth understanding of the fiscal mapping process.

Fig. 1 Overview of the research design for developing and evaluating the fiscal mapping process

Aim 1 is to develop the fiscal mapping process by adapting the intervention mapping process [74] and incorporating our compilation of financing strategies [69]. We will use a modified Delphi technique [91] to obtain formative stakeholder feedback. Delphi is a structured approach to group decision-making, and previous research has established its use for developing consensus about implementation strategies [66]. Sub-aims are to (1a) achieve consensus among our participants—through two web-based survey rounds followed by a round of live, virtual voting—on the key steps of the fiscal mapping process, while (1b) incorporating additional information into the financing strategy compilation to more fully inform strategy selection.

Aim 2 is to evaluate the preliminary impact of the fiscal mapping process. Our 2-year timeline is too short to observe sustainment trajectories, so we will instead focus on short-term factors related to EBT sustainment. Specifically, we will (2a) examine EBT sustainment capacities (e.g., for strategic planning) [29, 56] and outcomes (e.g., intentions to sustain) at the ten pilot-testing service agencies using a comparative case study approach [92, 93]. Each agency that pilot-tests the fiscal mapping process will be considered a case, and we will draw on multiple data sources (i.e., surveys, focus groups, document review, field notes) to compare and contrast experiences across agencies. Following pilot-testing, participants will contribute to a conceptual model of fiscal mapping’s process and outcomes through a participatory modeling exercise [94].

Project timeline

The project began in February 2021, focusing first on recruitment and on developing the initial fiscal mapping process prototype. When we completed this protocol in October 2021, we had finished recruitment and were conducting initial training with participating agencies; pilot-testing and data collection will take place over the subsequent 12 months. We will iteratively analyze data and incorporate findings into the fiscal mapping process throughout pilot-testing, with the goal of finalizing the tool by the end of the project period (January 2023). Although this timeline coincides with the COVID-19 pandemic, all project activities were planned to be conducted virtually, which minimized disruption.

Participant and site recruitment

We have recruited 48 expert stakeholder participants, representing key roles in US youth mental health services, and we will engage them in all phases of the project. Stakeholder involvement is critical to producing research evidence relevant to those who deliver and fund EBTs [95,96,97,98]. Our recruitment plan is grounded in comparative case study methods [92, 93], using rigorous sampling to maximize the representativeness of small samples when random sampling is not feasible or effective [99, 100]. The cases are the ten service agencies, each represented by service agency representatives and their EBT intermediary and funding agency partners. Experts recommend recruiting approximately ten cases for subtle between-case comparisons [93], and representing multiple perspectives from each case [92]. For our sample, each agency will contribute up to 3 participants per stakeholder group. The resulting sample will allow us to use a variety of research methods, including—but also well beyond—case study methods.

EBT intermediary representatives

To begin recruitment, members of the research team nominated EBT intermediary organization representatives with expertise in the high-fidelity implementation and sustainment of PCIT or TF-CBT. We met our goal of recruiting 12 intermediary representatives; of the 12 enrolled intermediaries, five had expertise in PCIT, four in TF-CBT, and three in both models.

Youth mental health service agencies (cases)

Using snowball sampling [99, 100], intermediary representatives nominated service agencies with which they had worked to implement PCIT or TF-CBT within the past 5 years. We invited those agencies to apply to join the project and enclosed a detailed information guide with each invitation. Our nomination and application process collected detailed quantitative and qualitative data about each agency from three stakeholder groups, which is ideal for rigorous case selection [100, 101] and will guide later comparative case study analyses. We received 45 service agency nominations and an additional six referrals from the nominated agencies, for a total of 51 nominees.

We found that youth mental health service agencies benefitted from technical support prior to their submitting an application. The principal investigator often met with agency representatives to engage them in the project and discuss key decisions, such as which service agency representatives should participate or which EBT would most benefit from the fiscal mapping process. We used purposive sampling [93, 99, 100] to prioritize cases for recruitment that provided a representative range of agencies, allowing for useful comparisons within and across our two EBT models of interest while providing adequate representation of service agency and funding agency participants. We recruited cases based on important characteristics of EBTs (e.g., use of PCIT vs. TF-CBT vs. both, use with racial/ethnic minority and low-income populations), agencies (e.g., type of agency, rural/urban service area, size), and funding contexts (e.g., state/region, service-funding agency partnerships). To ensure a clear focus on sustainment, agencies were required to have fully implemented the EBT of focus with at least one clinician.

In the application, agencies also contributed to our snowball sampling recruitment approach by nominating stakeholders involved in their EBT sustainment efforts to participate, including representatives from the service agency and from partner funding agencies. We then followed up with nominated individuals to verify their interest in participating (prior to finalizing an agency’s selection) and to gather demographic information.

Ultimately, 12 agencies submitted applications to join the project, of which ten were selected to pilot-test the fiscal mapping process. Four of the participating agencies chose to focus on PCIT for the pilot test, and the other six chose to focus on TF-CBT. The two agencies that applied but were not selected both had difficulty identifying service agency and/or funding agency representatives with the capacity to participate in the project (i.e., nominees from the application did not follow through with enrollment).

Youth mental health service agency representatives

Service agencies nominated personnel with expertise in and oversight of the financial aspects of EBT implementation and sustainment at the agency. We sought to recruit at least 18 service agency representatives; nominated representatives were typically willing to participate once their service agency had committed, and we ultimately enrolled 24. Most held an agency leadership role (e.g., CEO, Chief Financial Officer, Vice President), a clinical administration role (e.g., clinical director, program supervisor), and/or a financial administration role (e.g., grants administration, development officer).

Funding agency representatives

Service agencies also nominated representatives from funding agencies that had supported their EBT of focus in the past 5 years. Although we sought to recruit 18 funding agency representatives, service agencies reported that it was challenging to identify funders who were willing to participate in this study; for example, some funding agencies had policies that precluded staff participation in research. We therefore concluded recruitment after enrolling 12 funding agency representatives, as this was equivalent to the number of intermediary participants and (given the higher-than-expected service agency representative enrollment) achieved the overall recruitment goal of 48 participants. The funding agency representatives came from a diverse range of organizations, including state and tribal agencies, private foundations, and managed care organizations.

Pilot-testing activities

Pilot testing will provide service agency representatives with hands-on experience that can inform ongoing refinements of the fiscal mapping process. As a supplement to StaRI, here we follow the Template for Intervention Description and Replication [102] (TIDieR; see Additional file 2) when describing the fiscal mapping process and associated activities.

Fiscal mapping process tool

The research team created an initial prototype of the fiscal mapping process (version 1.0) for pilot-testing. The prototype format is an Excel workbook, and it is structured to clearly indicate what information should be entered to complete each step, but also flexible enough to accommodate agencies’ varied strategic planning goals and capture important contextual factors in each step. After specifying the focus of a given fiscal map (EBT, sites, etc.), the user completes the five fiscal mapping process steps: (1) resources needed, (2) funding objectives, (3) financing strategies, (4) fiscal map of EBT, and (5) monitoring plan (see Table 1). A resource tab accompanies each step with other materials useful for completing the step. For example, Step 1 resources include information about EBT time and cost models that help identify resource needs [103] and Step 3 resources summarize the aforementioned compilation of 23 financing strategies for behavioral health [69]. Each resource tab also includes a completed example of the associated step with a hypothetical service agency.
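As an illustration, the workbook's five-step structure could be represented as a simple data schema. The step names follow the tool as described above, but the field names in this sketch are hypothetical placeholders, not the actual workbook columns:

```python
# Hypothetical sketch of the fiscal mapping workbook's five-step structure.
# Step names come from the protocol; the per-step fields are illustrative
# assumptions, not the tool's actual column headings.
FISCAL_MAP_STEPS = {
    1: ("Resources needed",     ["resource", "quantity", "estimated_cost"]),
    2: ("Funding objectives",   ["objective", "target_amount", "timeframe"]),
    3: ("Financing strategies", ["strategy", "availability", "suitability"]),
    4: ("Fiscal map of EBT",    ["resource", "strategy", "amount_covered"]),
    5: ("Monitoring plan",      ["indicator", "data_source", "review_date"]),
}

def blank_step(step_number):
    """Return an empty record (dict of fields) for a given step."""
    name, fields = FISCAL_MAP_STEPS[step_number]
    return {"step": name, **{field: None for field in fields}}
```

Structuring each step as a fixed set of fields mirrors the workbook's design goal: clearly indicating what information completes each step while leaving the content flexible.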

Initial training

We will provide a 3-h virtual training to the representatives from each pilot-testing service agency via Microsoft Teams. The agenda includes (a) introductions and project overview (30 min); (b) step-by-step instructions for using the fiscal mapping process, including ample hands-on discussion about completing the tool’s steps for the service agency (2 h); and (c) plans for coaching calls and data collection activities (30 min), with regular breaks included. We will promote engagement in the training through a practical, applied focus that allows agency representatives to leave training with an in-progress fiscal map and concrete next steps for using the tool. We will video-record each training session and give the service agency representatives access to the recording if desired. Two coaches (the principal investigator and project manager) will lead trainings for five agencies each; the other coach will attend to provide technical support and record detailed field notes. Both coaches have training in mental health service delivery (as a clinical psychologist and social worker, respectively) and in EBT implementation.

Monthly coaching

To facilitate the use of the fiscal mapping process, each coach will provide monthly coaching sessions for 1 year with the service agencies for which they led training. Coaching sessions will be brief (~ 15 min per month) and focus on answering the service agency representatives’ practical questions about applying the fiscal mapping process. Prior to each coaching call, the coach will send a structured email inquiry asking representatives to specify (a) which fiscal mapping process steps they have worked on; (b) key areas they wish to prioritize for coaching, such as working toward completion of certain steps or deciding how to share conclusions with stakeholders; and (c) any desired modifications to the session format, like extending the session length or inviting stakeholders to join. The coach will also be available for as-needed consultation outside of the scheduled coaching calls; thus, rather than limiting coaching to 15 min, the use of this brief model provides a sustainable way to maintain monthly coach-agency contact for the duration of pilot-testing. Coaches will record field notes about the frequency, length, modality, and content of each coaching contact in a detailed logbook.

Plans to address adaptation and fidelity

Throughout the pilot-testing year, we will incorporate feedback from the 48 stakeholder participants into refinements of the fiscal mapping process. If there are major changes to the tool (Version 2.0, 3.0, etc.), then we will re-distribute it to participating agencies and provide additional guidance or training as needed. Thus, we will initially prioritize the adaptability of the fiscal mapping process while we incorporate stakeholder perspectives into the tool. Over time, we will develop a fidelity checklist of core fiscal mapping process steps that can be used by coaches as well as guide fidelity assessments for subsequent evaluations of the strategy.

Data collection activities and measures

We will collect a mix of quantitative and qualitative data from the expert stakeholder participants for all project aims (see Fig. 1). Data will be collected using secure web-based programs: SelectSurvey for surveys and Microsoft Teams or Zoom.gov video-conferencing for the focus groups, webinar, and training/coaching activities. We will not collect personally identifiable information; instead, we will assign each participant a unique, anonymous identification number to identify their data. Table 2 provides a summary of each data collection activity, including the timeframe, measures used, participants involved, compensation amount, and relevant aims.

Table 2 Data collection activities for developing and evaluating the fiscal mapping process

Surveys

The modified Delphi [91] (Aim 1a) will begin with two rounds of feedback on the fiscal mapping process via online surveys administered 6 months apart. Each online survey will provide (a) a detailed description of each step of fiscal mapping; (b) a text box for comments, concerns, or proposed changes to each description; and (c) a text box to offer additional or alternative steps for the fiscal mapping process.

We will also incorporate feedback into the compilation of financing strategies [69] (Aim 1b) through two follow-up surveys (one in each of the first two Delphi rounds). Service agency representatives will provide additional information about their agencies in these follow-up surveys to provide context for the feedback. In the first survey, the expert participants will review the compilation and provide (a) quantitative ratings of each strategy’s relevance to youth mental health services, (b) qualitative feedback on each strategy, and (c) suggestions for additional financing strategies. Service agency representatives will provide ratings, using validated scales, of the agency’s implementation climate (Implementation Climate Scale [105]) and financial status for EBT implementation (Agency Financial Status Scales [104]). In the second survey, participants will provide ratings of each strategy’s availability in their funding environment, level of suitability for funding different implementation activities, feasibility, and effectiveness. Service agency representatives will also rate each strategy’s contribution to their funding for EBT sustainment (percentage of total funding over the last 3 years).

Each survey (Delphi + follow-up) is expected to take approximately 30 min. Participants will receive a $30 electronic gift card for each completed survey.

Focus groups

About 3 months after each survey, we will conduct a virtual focus group with each service agency. A given focus group will include one service agency’s representatives; the funding agency representative(s) nominated by the service agency; and an intermediary with expertise in the EBT of focus for pilot-testing (ideally, but not necessarily, the intermediary who nominated the agency). During the focus group, participants will discuss the service agency’s experience with pilot-testing the fiscal mapping process and how using the tool has impacted EBT sustainment capacities (from the Public Health Sustainability Framework [29]; especially financial stability and strategic planning) and outcomes. The groups will also discuss key characteristics of the EBT, agency, and funding context that influence the fiscal mapping process. To avoid demand effects, the coach who does not conduct a given agency’s coaching sessions will lead that agency’s focus group. A research assistant will support each focus group by taking detailed notes, and sessions will be audio-recorded for later analysis.

Each focus group is expected to take approximately 1 h, and participants will receive a $50 electronic gift card as compensation. Afterwards, participants will complete a brief web-based survey rating (a) the agency’s capacity for sustaining the chosen EBT using the Program Sustainability Assessment Tool, a measure of Public Health Sustainability Framework domains [56]; (b) extent of EBT sustainment using the three-item Provider REport of Sustainment Scale [106]; and for service agency representatives only (c) intentions to sustain the EBT over the next year. The focus group audio-recordings will be transcribed, with any identifying information removed, and destroyed once the analysis is complete.

Document review

To provide additional insights into the use of the fiscal mapping process, we will also collect and review relevant documents, such as agencies’ draft or final fiscal mapping process tools or information obtained from EBT intermediary and funding agency partners that informed completion of the tool. This method can provide useful insights into complex systems-level processes when interpreted alongside other qualitative and quantitative data [107]. We will identify relevant documents during the focus group discussions and coordinate with service agencies to support sharing as much as they are comfortable (establishing data use agreements and secure file transfers as needed).

Webinar: consensus voting and participatory modeling

At the end of pilot-testing, we will invite all 48 participants to participate in a 2-h webinar. Two data collection activities will be completed during the webinar: consensus voting for the final Delphi round (Aim 1a) and a participatory modeling exercise (Aim 2b). The two fiscal mapping process coaches will serve as facilitators.

The final Delphi round will be a live voting and consensus process. The facilitators will present each step of the process for voting, with associated comments and alternative specifications (if applicable). As in a prior Delphi study of implementation strategies [66], we will use the US Senate benchmark for the supermajority needed to end debate (≥ 60%) [108] as the threshold for consensus. We will attempt to identify consensus on a step using approval votes (i.e., votes for all acceptable options) before moving on to run-off voting, as approval voting is the most efficient and “sincere” (i.e., strategy-proof) form of voting [109]. If consensus is not reached after run-off voting, the original description of the step will be retained. Throughout voting, participants can make comments in the chat or virtually “raise their hand” to make verbal comments for 1 min at a time. We will keep a record of the webinar polls used to count votes.
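The approval-then-run-off tally described above can be sketched as follows. This is an illustrative implementation of the 60% supermajority rule, not project code; in particular, the run-off format (each participant casting a single vote for their preferred finalist) is an assumption:

```python
from collections import Counter

def approval_consensus(ballots, threshold=0.60):
    """Each ballot is the set of options a participant finds acceptable.
    Returns the most-approved option if it clears the supermajority
    threshold, otherwise None (signaling that a run-off is needed)."""
    counts = Counter(opt for ballot in ballots for opt in set(ballot))
    option, votes = counts.most_common(1)[0]
    return option if votes / len(ballots) >= threshold else None

def runoff(preferences, finalists, threshold=0.60):
    """Run-off between finalist options: each participant's ranked
    preference list reduces to one vote for their preferred finalist.
    Returns the winner if it clears the threshold, else None."""
    votes = Counter(next(opt for opt in ranking if opt in finalists)
                    for ranking in preferences)
    option, count = votes.most_common(1)[0]
    return option if count / len(preferences) >= threshold else None
```

If neither round yields a supermajority, the original description of the step is retained, per the rule stated above.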

In the second portion of the webinar, participants will complete a participatory modeling exercise in which they conceptualize the process and outcomes of fiscal mapping. Participatory modeling is a technique from systems science that guides a group of stakeholders through the creation of a conceptual model of systems structures [94]. The facilitators will guide participants’ identification of actors, activities, outcomes, and contextual factors involved in each step of the process and solicit ideas for how to best evaluate changes in these factors. We will use the whiteboard function to illustrate the participants’ conceptual model in real-time as the discussion proceeds. To help make the discussion more engaging, we will solicit feedback through diverse channels including webinar polls, word clouds, chat box (including an anonymous option), and annotation on the whiteboard.

We will video-record the entire webinar to allow for a detailed record of the activities. The recording will be destroyed once the analysis is complete. We expect the entire webinar will take approximately 2 h, and attendees will each receive a $100 electronic gift card.

Field notes

As noted previously, coaches will log detailed field notes during training and coaching activities. In addition to being useful for the coaching process, these notes can be analyzed later for research purposes. The content of field notes will be most relevant for capturing service agency feedback on the fiscal mapping process (Aim 1b) and offering another source of insights into agencies’ experiences with the process and its outcomes (Aim 2a).

Analysis plan

Our analytic approach is grounded in mixed methods, which is standard practice for implementation research [90]. Mixed methods involve combining quantitative data (Delphi votes, standardized scales) and qualitative data (e.g., focus group notes and transcripts, open-ended survey responses, document review, field notes) to gain higher-level insights that would not be possible through the use of either approach in isolation.

Initial data processing

We will calculate descriptive statistics for quantitative measures. For qualitative data, we will use rapid content analysis [110, 111] to distill major themes from a given data source. Rapid content analysis is ideal for synthesizing actionable conclusions from qualitative data to inform implementation activities, and it can be applied to a variety of written data sources (including documents and logs [107]). Qualitative themes will be critical for interpretation, given that our small sample precludes complex quantitative analyses. We will also calculate internal consistency reliability for each scale and compare quantitative and qualitative results as a validity check.
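For instance, internal consistency reliability is commonly computed as Cronbach's alpha. The following is a minimal from-scratch sketch of that calculation (in practice a statistics package would be used):

```python
def cronbach_alpha(scores):
    """Cronbach's alpha for a scale.

    scores: list of respondent rows, each a list of item scores
    (items are columns, respondents are rows)."""
    k = len(scores[0])  # number of items
    def var(xs):        # population variance
        m = sum(xs) / len(xs)
        return sum((x - m) ** 2 for x in xs) / len(xs)
    item_vars = [var([row[j] for row in scores]) for j in range(k)]
    total_var = var([sum(row) for row in scores])
    return (k / (k - 1)) * (1 - sum(item_vars) / total_var)
```

Perfectly correlated items yield an alpha of 1.0, and alpha falls as item responses diverge; conventionally, values of about 0.7 or higher indicate acceptable internal consistency.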

Aim 1: development

We will organize the quantitative and qualitative survey data (from Aims 1a and 1b) into response matrices, which will guide team discussions about how to incorporate stakeholder feedback into the fiscal mapping process. The matrices represent a mixed-methods convergence function [90], where cells will summarize the overlap between qualitative and quantitative feedback across different dimensions (e.g., EBT models, stakeholder types) to help identify key priorities. For example, we might make refinements to the prototype by adding, removing, or refining the steps; we might also incorporate additional resources, including summaries of survey ratings on the financing strategy compilation. Ultimately, we will produce a well-specified fiscal mapping process with consensus on the key steps involved [91, 108].

Aim 2: evaluation

Our evaluation will primarily rely on the comparative case study approach [92, 93], synthesizing all available quantitative and qualitative data for in-depth insights into each case (i.e., each youth mental health service agency that pilot-tested the fiscal mapping process). This approach involves creating descriptive summaries of the role of the fiscal mapping process in EBT sustainment capacities and outcomes at each agency, clearly identifying the contributions of different qualitative and quantitative measures to the conclusions drawn. We will then compare and contrast the ten pilot-testing agencies based on the key characteristics in the sampling plan. At various points in the analysis, a given pair of agencies may be grouped together or contrasted, depending on the characteristic being considered. We will also consider differences in perspective among the three stakeholder participant groups (service agency, funding agency, intermediary). Statistical power is limited, but we will examine whether quantitative data follow expected contrasts and patterns over time, such as more effective use of fiscal mapping at agencies with higher and/or increasing strategic planning capacities. We will heavily leverage qualitative data to ensure accurate interpretation and maximize depth of understanding.

To complement our comparative case studies, we will analyze the participatory modeling exercise results to create an overarching conceptual model of the fiscal mapping process that can guide future evaluation. Following the webinar, the project team will review the exercise results and create a system dynamics diagram [94] representing the conceptual model that the participants generated. The system dynamics diagram will specify actors, activities, outcomes, and contextual factors for each step of the fiscal mapping process, providing a visual representation of the complex interactions and feedback loops involved in EBT financing decisions. For specified outcomes of each step, we will also note key indicators for evaluating success. Finally, we will use the conceptual model to expand the Public Health Sustainability Framework [29]—which describes key capacity domains but is silent on how to evaluate their impact—so that the framework can guide prospective evaluations of the fiscal mapping process and other approaches targeting EBT sustainment.

