Mixed methods analyses will be used to evaluate the following three aims:
To compare the effectiveness of two implementation strategies (LEAP QI Learning Program + AD vs. AD alone) on potentially inappropriate medication use, using a pooled analysis of effects across the three trials at the clinic level at 18 months, 2 years, and 3 years post-baseline, based on monthly assessed data from 13 to 36 months;
To compare the effectiveness of the two implementation strategies on secondary outcomes specific to each trial at 18 months, 2 years, and 3 years post-baseline, based on monthly assessed data from 13 to 36 months; and
To explore the effects of implementation, provider behaviors and experiences, and context, on sustained improvements in potentially inappropriate medication use.
For the purposes of pooled analysis across the three trials, an analogous dichotomous outcome will be identified for each trial, reflecting the proportion of patients with potentially inappropriate medication use. This will effectively triple the number of clinics (8 clinics per trial; 24 total clinics) included in the analysis of Aim 1. Additionally, each trial will be analyzed as a standalone study. All three trials will have distinct secondary outcomes.
Our aims are designed to deepen commitment to sustain EBP use by including measures that matter to different key constituencies, including employees (e.g., workgroup functioning, job satisfaction), health system leaders (e.g., increased use of EBPs), and patients (e.g., reduction in PIMs) [6, 7, 45]. The combination of implementation strategies with measures that matter is designed to empower teams and individuals to increase the meaning and purpose of their work, focused on the health and well-being of the Veterans we serve. Evaluation results will provide guidance as to which implementation strategy is more likely to lead to sustained outcomes.
Human subjects protection
The MIDAS QUERI trials qualify as non-research conducted under the authority of Veterans Health Administration (VHA) operations, as they were designed and implemented for internal VHA purposes (to improve patient care) and not to produce information to expand the knowledge base of a scientific discipline.
In response to the designation of broad categories of activities as non-research in the Federal Policy for the Protection of Human Subjects (Common Rule) in Title 38 Code of Federal Regulations Part 16 (38 CFR 16.102(l)) published January 19, 2017, the VHA enacted new policies and guidelines for determining non-research activities within VHA. In accordance with these VHA policies and guidelines, this program has documentation as non-research by Pharmacy Benefits Management, Office of Mental Health and Suicide Prevention, and Veterans Integrated Service Network (VISN) 10, which are each authorized to deem projects as non-research activities for which formal IRB oversight is not required, as defined per VHA Handbook 1058.05 in the section “Officials Authorized to Provide Documentation of VHA Program Office Non-Research Operations Activities” and later updated in section 5a of the VHA Program Guide 1200.21.
This program is designed as a concurrent nested mixed methods evaluation [46, 47] in the context of cluster-randomized trials that will evaluate the effectiveness of AD+LEAP over AD as implementation approaches to improve the use of EBPs across three trials (See Additional file 1 for the SPIRIT checklist). Each trial will launch in quarterly increments over a 9-month period, each enrolling 8–12 clinics, randomized to one of the two implementation arms. For each trial, AD and AD+LEAP intervention activities will take place over a period of up to 12 months. Because our focus is on sustainment of improvements in clinical measures for each EBP, administrative data on key outcomes will be obtained over 36 months with a focus on comparisons at 18-month and 2- and 3-year post-randomization follow-ups.
When research aims align with clinical priorities articulated by health system leaders, the likelihood of greater benefit can be dramatically amplified. This program was developed in partnership with key offices within VHA, including Pharmacy Benefits Management, Office of Mental Health and Suicide Prevention (OMHSP), and executive leaders in two VISNs. We have worked closely with Pharmacy Benefits Management’s VA MedSAFE program and Academic Detailing Service. Our multi-faceted metrics are designed to deepen commitment to sustain the EBPs and, in turn, to institutionalize them.
As described above, all participating clinics will have access to regularly updated data from dashboards or similar resources. The dashboards provide clear, simple descriptions of care practices (e.g., patients at elevated risk of polypharmacy), thereby allowing easy identification of care variances to help detailers, individual providers, and clinic-level leadership identify opportunities for improvement. In VHA, advances in medical informatics in design and content have produced increasingly user-friendly, responsive, and actionable dashboards, which have helped to amplify the work of clinicians and academic detailing pharmacists in invaluable ways. Our team is currently conducting a scoping review that will provide deeper knowledge of factors affecting uptake and effectiveness of dashboards. Well-designed dashboards promote data-driven care optimization for individual care and population health management. All arms of care and outcomes of the trials will align with the clinical dashboards or a similar resource (e.g., the VIONE trial will rely on a VIONE practice dashboard and our measures will replicate those reported through the dashboard). Two implementation approaches that each use a dashboard or similar resource are described in the next sections (see Additional file 2 for StaRI Reporting Checklist details).
Our AD intervention is modeled on existing AD principles. AD is a direct educational outreach of face-to-face (and more recently, virtual [51, 52]) interactions between academic detailers and clinicians that incorporate principles of adult learning theories, the theory of planned behavior, and social marketing to improve the use of EBPs. Using an accurate, up-to-date synthesis of the best clinical evidence in an engaging format, academic detailers ignite clinician behavior change, ultimately improving patient health. Evidence syntheses reveal that AD alone can be effective [54,55,56,57]; however, AD combined with other approaches (e.g., audit and feedback) is most effective in changing prescribing practices.
We will create an AD program that can be used and adapted for each intervention. The program will include a generalized approach based on existing recommendations, including documentation and training that will be general to the program and specific to each trial. For each trial, we will work with content experts and operational partners to develop 4–6 key messages that will be tightly linked with the primary outcome of the trial and the data in the respective clinical dashboard or resource. We will create detailer- and provider-focused educational materials to guide the detailer’s conversations with providers. This will include sample conversational scripts and tools to integrate detailing messages with provider-specific care patterns from the data.
Our detailers will be hired specifically for this project. They will attend training with the National Resource Center for Academic Detailing (NaRCAD) and through the VA Office of Academic Detailing. They will receive trial-specific education from each trial’s principal investigator and relevant content experts. They will shadow current detailers and will role-play the detailing sessions to practice conveying the key messages both internally and with non-participating practitioners. The sessions’ framing is based on the Theory of Planned Behavior [59, 60] and motivational interviewing.
The specific content of each visit will be tailored to address the specific context (barriers) identified at each participating clinic and for that specific provider. The detailer will start with an initial virtual visit with providers and other key staff at participating clinics; the detailer will meet with providers at each clinic for 15–30 min each. The detailer will review the dashboard in preparation for each visit to identify gaps and opportunities for improvement. A second, virtual visit will be completed four to eight weeks later to follow up with each participating provider.
For all three EBPs, we will identify provider-level barriers, including lack of knowledge about the EBP or uncertainty about the value of the practice. Our AD strategy is designed to address these gaps by supporting individual providers in both the use of the EBP and the use of clinical data to guide the practice.
Our AD approach will also include identifying a local champion prior to the first clinic visit. A “train-the-trainer” approach will be used to help ensure activities continue over the long term. The level and nature of champion engagement will be collaboratively determined by need and availability. Ideally, a local champion will shadow our detailer during visits and will be provided with training resources and coaching. Our detailer will develop a plan with the local champion to continue to reach out to new providers as appropriate to more deeply embed and sustain the practice and to track EBP use with the relevant data resource.
The Learn. Engage. Act. Process. (LEAP) program
Through prior work, we have identified common barriers encountered when implementing EBPs. This work, guided by the Consolidated Framework for Implementation Research (CFIR), has repeatedly identified a lack of planning, inconsistent engagement of key stakeholders, and failure to take time to reflect on and evaluate progress and impact in EBP implementation efforts [63,64,65]. LEAP, a blended implementation approach, is specifically designed to address these barriers by interweaving four discrete, evidence-based implementation strategies: (1) create a learning collaborative, (2) assess for readiness and identify barriers and facilitators, (3) audit and provide feedback, and (4) conduct cyclical small tests of change [67, 68].
The LEAP QI program engages frontline teams in sustained incremental improvements of EBPs over a 6-month period of hands-on learning, designed for busy clinicians as listed in Fig. 1. The Institute for Healthcare Improvement’s (IHI) Model for Improvement and PDSA cycles of change provide the core foundational approach for team-based, hands-on learning and coaching support within a QI network to enhance learning and accountability.
The LEAP curriculum was adapted from a Massive Open Online Course (MOOC) developed by HarvardX in collaboration with IHI. Materials from the MOOC were adapted for LEAP by (1) designing for teams rather than individuals, (2) streamlining materials to accommodate busy frontline clinicians, and (3) lengthening program duration to provide more time to complete an improvement project. The LEAP curriculum includes brief videos, short readings, and easy-to-understand templates and tools, using selected content developed by IHI and HarvardX. The curriculum is paced, with new guidance released on a weekly basis through an online platform (SharePoint Online). Assignments completed in LEAP (i.e., project charter) can be drawn on for continued future improvement efforts. Continuing education credits (CEs) are available through VA’s Talent Management System (TMS).
Each clinic participating in LEAP forms a QI team. In our cluster randomized design (described below), teams will participate in cohorts of 4–6 to create a learning collaborative. LEAP coaches interact with teams in individual webinar sessions in the early weeks of LEAP and later via virtual collaboratives with all teams. LEAP teams choose aims, plan projects, and monitor data to bring about meaningful changes based on the specific needs surrounding the EBP at hand. The LEAP implementation strategy also includes a 6-month maintenance component, called LEAPOn, that provides monthly collaboratives for teams to encourage continued work on PDSA cycles.
So far, 49 teams have completed LEAP, comprising 276 frontline staff, clinicians, and Veterans. Based on first-year results, LEAP measurably increased confidence in using QI methods, and participants were satisfied or very satisfied (81–89%) with all LEAP components. In addition, 96% agreed or strongly agreed that LEAP was relevant to the needs of their program. Post-LEAP, teams intended to continue to optimize care for their patients; however, participants struggled most with the lack of available time for QI amid competing clinical priorities.
Conceptual framework for evaluation
MIDAS QUERI focuses specifically on the sustained use of EBPs. The Dynamic Sustainability Framework (DSF) asserts that “[o]ngoing quality improvement of interventions is the ultimate aim…[because] evidence solely from clinical trials [is insufficient] and…quality improvement processes focused on intervention optimization are ultimately more relevant to achieve sustainment.” Sustainment science literature [7, 45] and other implementation science frameworks [72, 73] all affirm the necessity of ongoing optimization. Thus, at the center of the DSF is the need to engage individuals and teams in continual adaptation and optimization through learning cycles like the PDSA cycles foundational in QI. However, clinical teams face significant challenges in conducting PDSA cycles because of patient care demands; they must also navigate constant changes in infrastructure, policies and procedures, and staffing, all of which leave little time for implementing improvements. Nevertheless, if frontline teams do not invest time and effort into making improvements, change will not happen and/or will not be sustained, leading to widespread failures across the system. The LEAP and AD strategies are specifically designed to engage busy frontline employees in continuing incremental optimization of each EBP.
Sustainability research highlights the need to identify outcomes important to multiple stakeholders for change to be fully integrated as routine care [7, 45]. Figure 2 shows our conceptual framework. At its heart is a positive reinforcing feedback loop between three categories of outcomes, each designed to meet the needs of three key constituencies: (1) employees who deliver treatments; (2) health system leaders; and (3) patients. Our strategies are designed to move individuals and/or teams into a virtuous cycle in which engaging in optimization brings visible improvements in work-life (e.g., burnout, satisfaction as measured by “Best Places to Work”) as employees are motivated [74, 75] by seeing measurable improvements in near-term service outcomes that matter to clinical leaders (increased use of EBPs) and patients who experience improved clinical outcomes. Sustained change relies on building ever-stronger coalitions of support, which can occur when outcomes are visible and communicated widely. This increased visibility with supervisors and other clinical leaders will help to foster willingness to allow the space and time needed to engage in optimization [4, 7, 8, 45]. Increasing capacity for change, especially through teamwork, is strongly associated with lower burnout among clinicians. We will combine qualitative findings with quantitative measures to help explain changes (or lack thereof) over time. Our AD strategy is based on the Theory of Planned Behavior, in which attitudes, subjective norms, and perceived behavioral control shape behavioral intentions that lead to engaging in cycles of optimization of personal work processes. The LEAP strategy relies on teaming theory and engagement in continuous QI and provides team-based structured coaching as teams learn to plan and execute PDSA optimization cycles.
The effectiveness of our implementation strategies will be moderated by contextual determinants (i.e., barriers and facilitators) influencing teams’ and individuals’ ability to engage in optimization. These contextual determinants will be assessed using a newly developed pragmatic Context Assessment Tool (pCAT; unpublished) that assesses nine constructs across three of the domains of the CFIR (Innovation Characteristics, Outer Setting, and Inner Setting). This prioritized list of constructs was chosen based on a series of context assessments during implementation evaluations in VHA [63,64,65]. The COVID-19 pandemic has heightened awareness of how unexpected events may also influence this pathway to the positive reinforcing feedback that is designed to keep individuals and teams engaged in optimization.
Clinic selection and eligibility
We will work with our operational partners to identify candidate clinics that want to reduce their use of PIMs based on the topic for each respective trial. We will provide an orientation to the topic, introduction of the dashboard, and overview of the two implementation intervention arms. Prior to implementation, we will work with interested clinics to ensure they have met the preconditions necessary to begin sustained optimization of the EBP: (1) a team leader or champion; (2) an identified department with service leadership buy-in and control over the processes/practices impacted by the implementation; (3) readily accessible data to monitor process and impact of the implementation and use of the EBP, e.g., through an easy-to-access dashboard; and (4) installment of key components needed to support the EBP (e.g., installation of a specific note template in the EHR system). We will recruit four to six clinics per arm per trial; a clinical leader will provide assent to participate and enroll.
Within each trial, clinics will be randomized after assenting to participate (equivalent to enrollment). Clinics will be assigned to one of two arms by a statistician, stratified further by clinic type (medical center, community clinic, or Community Living Center) if needed to ensure partial balance between arms with respect to potential confounders associated with culture and complexities associated with clinic location [79, 80].
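The stratified allocation described above can be sketched as follows. This is a minimal illustration only, not the statistician's actual procedure; the clinic identifiers, the type labels, and the alternating within-stratum assignment rule are all assumptions:

```python
import random

def stratified_randomize(clinics, seed=2024):
    """Assign clinics to 'AD' or 'AD+LEAP', balancing within clinic-type strata.

    `clinics` is a list of (clinic_id, clinic_type) pairs, where clinic_type is
    e.g. 'medical center', 'community clinic', or 'Community Living Center'.
    """
    rng = random.Random(seed)
    assignment = {}
    strata = {}
    for cid, ctype in clinics:
        strata.setdefault(ctype, []).append(cid)
    for members in strata.values():
        rng.shuffle(members)
        # Alternate arms within each shuffled stratum so the arms stay
        # near-balanced with respect to clinic type.
        for i, cid in enumerate(members):
            assignment[cid] = "AD+LEAP" if i % 2 == 0 else "AD"
    return assignment
```

With 8 clinics split across the three clinic types, this yields 4 clinics per arm while keeping each stratum split as evenly as its size allows.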
Outcomes and analyses
As part of a pooled analysis, we will compare the same two implementation strategies across all three EBPs and take a unified approach to implementation and evaluation across the trials. Table 1 shows MIDAS measures, data collection timeframe, and data sources. While a unified dichotomous outcome, i.e., PIMs, was identified for each trial to allow for the pooled analysis, each trial will also be analyzed individually (see Table 1).
Aim 1: primary outcomes and pooled analysis
Although each trial will be conducted as an independent study, our primary aim is to compare, across trials, the effectiveness of the two implementation strategy arms in reducing PIMs during the post-implementation period. To this end, we defined a unified primary outcome to allow us to combine the results across the three trials. The unified primary outcome will be operationalized based on a patient-level dichotomous response indicating PIM use (yes/no) among patients at risk of PIMs, i.e., among those who may benefit from the specific EBP each month. The monthly patient-level PIM use response will be summarized to a clinic-level month-by-month percentage of potentially inappropriate use using administrative data from baseline to 36 months, with months 13–36 as the post-implementation follow-up period. The monthly data for each trial will be cross-sectional, i.e., different patients may be included each month.
For inappropriate polypharmacy, the clinic-month outcome will be the proportion of patients who had medication possession (based on VA pharmacy fill data) of one or more medications from the AGS Beers criteria that are included on the VIONE PIMs dashboard (numerator) among patients age 65 or older, not receiving palliative care, and followed by the clinic (denominator). For each drug included on the PIMs dashboard, there are associated business rules that define when medication use is flagged as potentially inappropriate; these same criteria will be applied in this trial. For example, the use of a first- or second-generation anti-psychotic drug is flagged as potentially inappropriate unless there is a diagnosis of schizophrenia or bipolar disorder. These criteria had previously been determined by VIONE’s Subject Matter Expert group, which provides VIONE with guidance on translating deprescribing criteria into the most practical and appropriate rules for use on the dashboard. Altogether, the following AGS Beers medications from the PIMs dashboard will be included in the analysis: anticholinergics, antipsychotics, aspirin, benzodiazepines, long-acting sulfonylureas, muscle relaxants, non-steroidal anti-inflammatory drugs (NSAIDs), proton pump inhibitors (PPIs), sliding scale insulin, and Z-drugs.
For DOAC safety, the outcome will be the proportion of patients with potentially inappropriate prescribing out of those using DOACs, as measured by “flags” (e.g., potential mis-dosing based on renal function and other indicators) on the DOAC dashboard. The DOAC flagging system is based on Food and Drug Administration (FDA) indications and has been in clinical use since 2018. Components of the outcome include inappropriate dosing for the given indication and the use of DOACs in contraindicated settings (such as valve replacements).
For first-line treatment for insomnia, the outcome will be the proportion of patients with a new prescription for a sedative-hypnotic medication who have not had CBTI in the prior 12 months, out of all primary care patients actively followed by the clinic who are not in hospice/palliative care.
For all three trials, medication use (yes/no) and possession of an active prescription for each month will be determined using exposure days based on supply days, and use will be determined by the exposure status on day 1 of each month. We will also conduct sensitivity analyses based on a criterion of use at any time during the month, as well as PIMs restricted to medications used chronically (e.g., on more than 90 of the prior 180 days).
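The day-1-of-month exposure rule and the clinic-level summary can be sketched as below. The fill-record layout and function names are illustrative assumptions, not the actual VA pharmacy data model:

```python
from datetime import date, timedelta

def exposed_on(day, fills):
    """True if any pharmacy fill's supply window covers `day`.

    `fills` is a list of (fill_date, days_supply) tuples for one patient's
    flagged (e.g., Beers-list) medications.
    """
    return any(f <= day < f + timedelta(days=supply) for f, supply in fills)

def clinic_month_pct(patients, month_start):
    """Percent of at-risk patients with PIM exposure on day 1 of the month.

    `patients` maps patient_id -> list of (fill_date, days_supply) for flagged
    medications; every patient in the mapping is assumed to be at risk.
    """
    flagged = sum(exposed_on(month_start, fills) for fills in patients.values())
    return 100.0 * flagged / len(patients)
```

For example, a 30-day fill on May 20 counts as exposure on June 1 (days 1–30 of supply cover that date) but not on June 19.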
For each trial, we will first compare demographic characteristics (age, sex, and race) of patients at risk of PIMs in the first month of implementation between the two arms. We will then obtain, for each trial by arm, crude monthly percentages (along with the corresponding 95% confidence intervals) of PIMs, averaged across clinics randomized to each arm and weighted by clinic-month size. For each trial, we will plot the monthly clinic-level percentages over the 13–36-month follow-up to graphically assess whether the difference between the two arms can be meaningfully summarized across the three trials with the unified outcome. If we find, for example, that between-arm trends over post-implementation months differ notably across the three trials, unified results comparing the AD+LEAP vs. AD arm across trials may not be meaningful, and we will only conduct analyses separately by trial.
For comparison between arms, we will use generalized estimating equations (GEE) with clinic-level monthly percent of PIMs among patients at risk during the post-implementation period (months 13 to 36) as the dependent variable. The model will include indicator variables for two of the three trials, with the remaining trial as the referent category, to account for differing underlying levels of inappropriate medication use across trials. The model will also include follow-up time in months and the LEAP+AD arm indicator with AD as the referent category and will adjust for serial correlation within clinic over time. We will also include a time-by-arm interaction to assess whether the magnitude of the difference between LEAP+AD vs. AD changes over time. If the interaction is significant, we will estimate the between-arm difference at 18 months as well as at 2 and 3 years separately based on the model with the interaction term. If the interaction is not significant, this would indicate that the between-arm difference does not differ across the three follow-up times of interest (18, 24, and 36 months); we will then drop the interaction term, and the parameter estimate for the LEAP+AD arm indicator will be used to estimate the time-averaged difference in the percentage of patients with inappropriate medications during the post-implementation period in clinics randomized to LEAP+AD compared to clinics randomized to AD.
If we find notable baseline demographic differences between arms within trials, we will use a generalized linear mixed model (GLMM) with logit link to estimate the between-arm difference while adjusting for baseline age, sex, and race differences, with monthly person-level response (yes/no) data from the post-implementation period of months 13 to 36. In addition to time, the AD+LEAP indicator, and trial type indicators as predictors, the GLMM using patient-level data will include patient age, sex, race, and random intercepts for patients nested within clinic to adjust for potential correlation within clinics and serial correlation over time. The parameter estimate for the LEAP+AD arm indicator will be used to estimate the time-averaged odds of inappropriate medication use during the post-implementation period for patients in clinics randomized to LEAP+AD compared to the odds for the same patients had their clinics been randomized to AD. Although the GEE and GLMM models give different summary estimates with different interpretations, the GLMM allows for adjusting for patient characteristics, and a consistent substantive conclusion will assure us of the evidence for the effect of LEAP when added to AD. Similar to the GEE model, we will test whether the odds ratio of LEAP+AD vs. AD changes over time by including a time-by-arm interaction term; if the interaction term is significant, we will obtain adjusted odds ratios associated with LEAP+AD compared to AD at 18 months, 2 years, and 3 years.
For each trial, we will also compare AD and AD+LEAP to usual care controls. To do this, we will perform a non-randomized secondary analysis for each trial. The analysis will have the same primary outcome variable and use the same generalized linear mixed model (GLMM) with logit link. The primary control group will be all non-participating sites. We will also conduct a secondary analysis in which, for each intervention site, we will select two control sites matched on clinic size (within 50%), pre-intervention outcome rate (within 30 rankings of all sites), and region of the country. These analyses will adjust for the clinic-level variables of clinic size, pre-intervention outcome rate, and region of the country, and the patient-level variables of age, sex, and race.
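The matched-control selection might be sketched as follows. The interpretation of "within 50%" as a 0.5x–1.5x size window, the field names, and the closest-rank tie-breaking rule are all illustrative assumptions:

```python
def match_controls(site, candidates, n_controls=2):
    """Pick control clinics for one intervention site.

    `site` and each candidate are dicts with keys: 'id', 'size',
    'rate_rank' (rank of the pre-intervention outcome rate among all
    sites), and 'region'. Criteria follow the protocol: size within 50%,
    outcome-rate rank within 30 positions, same region of the country.
    """
    eligible = [c for c in candidates
                if c["region"] == site["region"]
                and abs(c["rate_rank"] - site["rate_rank"]) <= 30
                and 0.5 * site["size"] <= c["size"] <= 1.5 * site["size"]]
    # Prefer the closest matches on outcome-rate rank (an assumed rule).
    eligible.sort(key=lambda c: abs(c["rate_rank"] - site["rate_rank"]))
    return [c["id"] for c in eligible[:n_controls]]
```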
Aim 2: secondary outcomes and analyses
Secondary outcomes for VIONE will be the prevalence of potentially inappropriate use of PPIs; the prevalence of potentially inappropriate use of aspirin; the prevalence of potentially inappropriate use of central nervous system (CNS)-active medications (muscle relaxants, anti-psychotics, Z-drugs, and benzodiazepines) or anticholinergic drugs; the number of inappropriate medications at the patient level; monthly medication costs for all drugs, without regard to appropriateness; and the number of pharmacist medication reviews.
Secondary outcomes for the DOAC trial will be the sub-components of the “flags” on the dashboard. These include potential mis-dosing, potential medication interactions, or concern for nonadherence. This follows the organizational structure of both the presentation of the flags on the dashboard and the key messages provided to the AD and LEAP teams. Process outcomes will be how often the provider uses the dashboard and rates of new DOAC starts compared to warfarin starts. These outcomes will be kept in alignment with our other work using the dashboard.
In stand-alone analyses of the CBTI trial, the primary outcome will be the prevalence of any CBTI receipt among primary care patients actively followed by the clinic who are not in hospice/palliative care. Secondary outcomes will be the mean number of CBTI sessions completed and referrals to CBTI. Receipt of any CBTI and the mean number of sessions will be measured by extracting from the medical records the CBTI note templates completed by CBTI therapists. CBTI referrals will be measured according to consult requests in the medical record or by monthly therapist reports.
Analyses of secondary outcomes, such as the percent of potentially inappropriate use of PPIs or the mean number of CBTI sessions at each clinic-month, will be similar to those of the primary outcome, using the GEE model accounting for correlation over time. We will also conduct separate analyses by trial with the dependent variables that are unique to each trial. For example, for the polypharmacy trial, the secondary outcome of interest is the count of medications flagged as inappropriate based on Beers’ criteria. We will compare monthly rates of Beers’ list medication use between implementation strategies using GLMMs with log link.
Aim 3: exploration of potential predictors of clinical outcomes
Our process evaluation will follow a multi-phase concurrent nested mixed methods design. This design has three purposes: (1) help prepare all stakeholders and participants prior to the start of each trial; (2) monitor the progress of implementation; and (3) explain summative findings. Overall priority is placed on quantitative methods that guide the trials, while qualitative methods are embedded or “nested” within conduct of the trials.
Employee behavior and experience measures will be collected via five scales as listed in Table 1. Surveys will be administered via online link within invitation emails; administration will occur at baseline and 18 months post-baseline; satisfaction will be elicited at the end of each intervention (upon completion of the 6-month “core” LEAP program for LEAP team members and at the end of each AD visit for AD participants). Descriptive statistics will be generated and tests for differences across implementation strategy arms will be conducted using mixed models to account for within-clinic correlation.
Qualitative data will be collected prior to and 18 months following baseline via semi-structured interviews (conducted virtually by telephone or videoconferencing software, e.g., the MS Teams platform). A purposive sample of key people (clinic leaders, supervisors, providers, and staff) at each clinic will be invited to participate so we can better understand the context in which the implementation strategies are/were deployed. The interview guides and qualitative analyses will be guided by the CFIR to identify potential and actual barriers and facilitators [63,64,65]. Principles embedded within the DSF will guide exploration of the degree of engagement in QI and teamwork. Prior to implementation, this information will help inform the work of the academic detailers and LEAP coaches; post-implementation, this information will help to explain quantitative findings within and across the trials. Interviews will be audio-recorded and transcribed verbatim. Pre-implementation, interviews will focus on collecting practical information using a rapid analysis approach [82, 83] to help tailor and adapt implementation for each participating clinic (see Additional file 3 for master interview guide). Post-implementation, qualitative analyses will seek insights on what kinds of improvements were made, barriers and facilitators to making improvements, and reflections on/satisfaction with participation in AD/LEAP, and will explore relationships between determinants, participants, and key stakeholders and how these may lead to building coalitions of support [7, 8]. We will combine qualitative findings with quantitative measures from Aims 1 and 3 to help explain changes (or lack thereof) over time.
Our process evaluation will rely on quantitative and qualitative data sources. Fidelity to each implementation strategy will be tracked by interventionists (the detailers and LEAP coaches) completing a mixed-methods self-assessment tool after each interaction (a coaching session for LEAP, a detailing contact for AD). These assessments will be used to guide coach-supervisor and peer reflections on improvements, problem-solving, and mitigating barriers and amplifying facilitators of improvement efforts. We will also track participation by participants (individuals scheduled for detailing and/or LEAP team members) and completed assignments by LEAP teams. The academic detailer and LEAP coaches will enter notes for each interaction into a tracking system for each strategy. These data will be combined with pre-implementation and 18-month semi-structured interview data for further insights into barriers, facilitators, and problem-solving approaches used by LEAP coaches and detailers. Quantitative and qualitative data will be combined at the analysis or interpretation phase.
We will use a micro-costing method [84, 85] to determine the costs to deliver LEAP and AD. The LEAP coaches and academic detailers developed a list of the activities they will perform for each participating site. Depending on the specific activity, they determined the best way to record the time spent on each activity—e.g., logging the start and stop time each time the activity takes place vs. setting an estimated average time for activities that take approximately the same amount of time for each incidence (such as recurring meetings, responding to quick queries via e-mail, meeting preparation, etc.). In the latter case, the coaches and detailers simply record the occurrence of the activity, which is then assigned the estimated time. The coaches and detailers will log times for each activity, categorized by participating site, in a time tracking database. Using data from this database, we will calculate the average time required for each activity and apply this to the number of times it takes place over the course of performing the implementation strategy at a site. These data can then be used to determine an estimated total time required to perform the implementation strategy (LEAP or AD) at a site, which, combined with the hourly cost of the LEAP coach or detailer, can be used to calculate the total cost of employing the strategy at a site.
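The cost calculation described above reduces to simple arithmetic; below is a minimal sketch with hypothetical activity names and times:

```python
def strategy_cost(activity_log, avg_times, hourly_rate):
    """Estimate the total cost of delivering an implementation strategy at a site.

    `activity_log` maps activity name -> number of occurrences at the site;
    `avg_times` maps activity name -> average minutes per occurrence (from
    the time-tracking database); `hourly_rate` is the coach/detailer cost.
    """
    total_minutes = sum(count * avg_times[act]
                        for act, count in activity_log.items())
    return total_minutes / 60.0 * hourly_rate
```

For instance, six 45-minute coaching calls plus ten 10-minute e-mail queries (370 minutes) at $60/hour would cost about $370 for that site.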