This pre-post pilot was conducted between July 2017 and June 2018 at six government health facilities in Kenya: three in Nairobi County, one in Homa Bay County, one in Kisumu County, and one in Siaya County. The six facilities were purposively selected to represent diversity in size and level of services: two county hospitals, two sub-county hospitals, and two health centers. All facilities provided comprehensive HIV testing and care services, with VL samples sent to centralized laboratories for processing. All invited facilities initially agreed to participate, but due to implementation delays, the Homa Bay facility was excluded from analyses, leaving a final sample of five facilities. During the year prior to the introduction of the implementation strategy, there were two nationwide health care worker strikes and a presidential election followed by a repeat election; these events have been documented to have negatively affected service delivery across Kenya [13,14,15].
This study was reviewed and approved by the University of Washington Institutional Review Board, the Kenyatta National Hospital Ethics and Research Committee, and the National Commission for Science, Technology, and Innovation (NACOSTI). Additionally, following ethical approval, the study was reviewed and approved by County and sub-County health offices, and further permission was sought from each facility’s medical superintendent and in-charge prior to facility engagement.
SAIA implementation strategy
SAIA consists of three systems engineering tools: the cascade analysis tool, flow mapping, and continuous quality improvement [8, 16, 17]. Frontline health care workers and managers use these tools in a cyclical approach to identify and prioritize gaps in service delivery and to test micro-interventions that improve care delivery systems.
Pediatric/adolescent cascade analysis tool (PedCAT)
The cascade analysis tool (CAT) is a simple Excel-based simulation model with an optimization function. The CAT is populated with routine program data for a specific facility and automatically quantifies the drop-off at each step of the HIV cascade, as well as the additional number of individuals who would complete all steps of the cascade if each single step were individually optimized. The goal of the CAT is to quantify and prioritize gaps in service delivery and to allow frontline health care workers to access and interpret their own data.
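As an illustration of the logic the CAT automates, a minimal sketch follows; the step names and counts are invented for illustration and are not study data or the actual PedCAT layout. It shows how drop-off at each cascade step, and the gain in cascade completers if a single step were perfected, could be quantified:

```python
# Hypothetical, simplified sketch of cascade analysis logic; step names
# and counts are illustrative, not study data.
steps = ["tested", "linked", "initiated ART", "VL sample", "VL suppressed"]
counts = [1000, 400, 360, 270, 216]  # clients completing each step

# Drop-off and conditional completion at each transition
for (name, n_from), n_to in zip(zip(steps, counts), counts[1:]):
    print(f"{name} -> next: completion {n_to / n_from:.0%}, drop-off {n_from - n_to}")

# Additional clients completing the full cascade if one step were perfected
final = counts[-1]
gains = {}
for i in range(len(counts) - 1):
    p = counts[i + 1] / counts[i]  # completion proportion at this step
    gains[steps[i]] = round(final / p) - final
print(gains)  # the step with the largest gain is the priority gap
```

In this sketch, the step with the largest projected gain is the one the team would prioritize for a micro-intervention.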
This tool required adaptation from the original SAIA package to apply to the pediatric and adolescent HIV cascade, turning the original CAT into the PedCAT. We conducted a physical walk-through of each pilot health facility to characterize health information registers, cards, and other data collection and reporting tools; observe patient flow; and ask each operator of the health system to describe the activities conducted at each step. Following this data mapping activity, an initial tool was created and presented to clinic managers and frontline health care workers to determine whether the service flow modeled in the tool reflected realistic flow patterns, made realistic assumptions, and was sufficiently simple for routine use; this process was similar to member checking in qualitative research. We conducted several rounds of revisions to the PedCAT before a final tool was agreed upon (Fig. 1).
Flow mapping
Flow mapping, also known as value stream mapping or process mapping, consists of frontline health care workers creating a visual map of their health system by drawing the sequential steps taken by clients, data, or samples; the goal is to identify system inefficiencies and bottlenecks and to visualize system reorganization [6, 18].
Continuous quality improvement (CQI)
CQI has a large body of effectiveness literature supporting its use in a range of settings [19,20,21], and it can be delivered in diverse ways. In this study, we utilized the Model for Improvement and “plan, do, study, act” (PDSA) cycles, in which health care worker teams address three questions in a group setting: What are we trying to accomplish? How will we know that a change is an improvement? What change can we make that will result in an improvement? Teams then Plan the details of a test of a micro-change, Do the micro-change, Study whether the micro-change impacted an identified indicator, and Act to adapt, adopt, or abandon that micro-change based on the indicator data.
Intended use of tools
The three SAIA-PEDS tools are intended to be used together in a cyclical way, with flexibility to lean more heavily on the tools that health care workers in a given local setting find useful and to de-emphasize those they find burdensome.
Training and staffing of implementation strategy
Three study staff members were responsible for training frontline health care workers in the SAIA-PEDS tools, and two of them were responsible for periodic visits to the facilities to coach and mentor frontline health care workers in the use of the tools. The study staff received intensive training in PedCAT interpretation, flow mapping, and CQI coaching; prior to study activities, both coaching staff members had experience in clinical care for children and adolescents in Kenya.
Frontline health care workers were trained together in a half-day offsite session; facility in-charges were responsible for selecting and recruiting at least one representative to attend from each of the following service delivery areas: outpatient, inpatient, HIV testing services, HIV care clinic, and laboratory. Training covered the basics of PedCAT interpretation and the basics of CQI, with a practical exercise in “plan, do, study, act” (PDSA), and included creating a flow map of a facility’s patient flow. Following this half-day training, study staff visited each facility for a facility-wide sensitization meeting, which covered the intent of the implementation strategy and allowed all facility staff to ask questions.
Schedule of follow-up visits and data collection at facilities
The intended schedule for coaching and mentorship visits by study staff to each facility was weekly for the first month, every 2 weeks for the next 2 months, and monthly for the final 3 months. The intended meeting participants were the frontline health care workers trained in the initial session, although the in-charge could make substitutions due to staff turnover or transfer. During each 1–2-h coaching and mentorship meeting, study staff guided the facility team through reviewing their micro-changes using PDSA cycles, including the data that facility staff had collected on the indicators chosen to evaluate each micro-change. The PedCAT and flow mapping tools were used as needed to identify and prioritize gaps and to brainstorm service flow reorganization.
Data sources and outcome definitions
We considered a range of routine data sources (described in detail elsewhere) with the intent of using easily accessible and accurate data that allowed disaggregation of children (0–9 years), adolescents (10–19 years), and young adults (20–24 years). Ultimately, paper registers and electronic medical records were utilized; two data abstractors per facility were engaged to abstract data from paper registers or electronic medical records, depending on the facility’s data systems. We abstracted anonymous, individual-level count data aggregated to the calendar day and age band (0–4, 5–9, 10–14, 15–19, and 20–24 years) during data collection. Count data were entered on tablets using Open Data Kit. Daily count data were subsequently aggregated to the month during data cleaning.
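The daily-to-monthly aggregation step can be sketched as follows; the facility labels, column names, and counts are invented for illustration and do not reflect the study's actual Open Data Kit export schema:

```python
import pandas as pd

# Hypothetical sketch of aggregating daily count data to the month,
# by facility and age band; all values are illustrative.
daily = pd.DataFrame({
    "facility": ["A", "A", "A", "B"],
    "date": pd.to_datetime(["2018-01-03", "2018-01-15", "2018-02-02", "2018-01-10"]),
    "age_band": ["0-4", "0-4", "0-4", "10-14"],
    "tested": [5, 7, 4, 9],
})

monthly = (
    daily.assign(month=daily["date"].dt.to_period("M"))
         .groupby(["facility", "month", "age_band"], as_index=False)["tested"]
         .sum()
)
print(monthly)
```

Grouping on a monthly `Period` rather than a raw date string keeps the month ordering correct when the cleaned data span a year boundary.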
Five outcome variables were assessed:

- HIV testing uptake: # children and adolescents who received HIV testing services (numerator) / # children and adolescents who presented to outpatient or inpatient departments (denominator)
- Linkage to care: # children and adolescents with new HIV care files (numerator) / # children and adolescents who were reactive in HIV testing (denominator)
- ART initiation: # children and adolescents starting ART (numerator) / # children and adolescents who were linked to care (denominator)
- VL monitoring: # children and adolescents with a VL sample collected (numerator) / # children and adolescents due for VL testing (denominator)
- VL suppression: # children and adolescents with VL < 1000 copies/mL (numerator) / # children and adolescents with VL samples taken (denominator)
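As a sketch of how the five indicator ratios could be computed from monthly counts (all counts below are invented for illustration, not study data):

```python
# Illustrative monthly counts; numerators and denominators need not be
# the same individuals, so ratios can exceed one (see text).
counts = {
    "presented": 800, "tested": 600, "reactive": 30, "new_files": 36,
    "art_initiated": 33, "vl_due": 50, "vl_collected": 40, "vl_suppressed": 32,
}

indicators = {
    "HIV testing uptake": counts["tested"] / counts["presented"],
    "Linkage to care": counts["new_files"] / counts["reactive"],  # >1 here (in-migration)
    "ART initiation": counts["art_initiated"] / counts["new_files"],
    "VL monitoring": counts["vl_collected"] / counts["vl_due"],
    "VL suppression": counts["vl_suppressed"] / counts["vl_collected"],
}
for name, value in indicators.items():
    print(f"{name}: {value:.2f}")
```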
All numerator and denominator data were directly abstracted from registers, with the exception of the number of children due for a VL sample, which was calculated as a monthly average based on the HIV care guidelines at the time: six-monthly VL monitoring during the first year of treatment, followed by annual VL monitoring. Of note, the individuals in the numerator and denominator of each outcome were not required to be the same individuals; this was not a longitudinal cohort. As a result, the ratio of numerator to denominator often exceeded one, particularly for indicators where substantial in-migration was common; for example, some facilities had substantial numbers of children diagnosed with HIV at other facilities who linked to their facility for HIV care services. In addition, these ratios cannot be accurately interpreted as proportions or absolute coverage because some groups of individuals may be systematically missing from denominators to simplify data abstraction; for example, HIV testing uptake denominators include only those children and adolescents accessing outpatient and inpatient care and do not include those seeking other services (e.g., family planning, specialty clinics). Further details are described elsewhere.
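The "due for VL" denominator can be illustrated with back-of-envelope arithmetic; the patient counts are invented, and the schedule assumed is the one described above (two VL tests per year during the first year on ART, one per year thereafter):

```python
# Hypothetical sketch of the monthly-average "due for VL" denominator;
# patient counts are invented for illustration.
on_art_first_year = 60    # active patients within 12 months of ART start
on_art_established = 240  # active patients beyond 12 months on ART

tests_due_per_year = on_art_first_year * 2 + on_art_established * 1
monthly_due = tests_due_per_year / 12
print(monthly_due)  # 30.0
```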
We considered the baseline period to be the 6 months prior to facility training in SAIA-PEDS (July 2017–December 2017) and the implementation strategy period to be the 6 months following the training (January 2018–June 2018). We conducted a simple pre-post analysis and interrupted time series analyses using linear mixed effects models, including random intercepts and random slopes to account for health facility clustering; model parameterization details are included in the Appendix. We fit a separate model for each of the five study outcomes. The presented average monthly counts are modeled values: geometric means across the five facilities derived from linear mixed effects models of log-transformed values. Changes were considered substantial if they were 20% greater or 20% less than the null value (relative risks ≥ 1.2 or ≤ 0.8). All analyses were conducted using Stata (StataCorp. 2019. Stata Statistical Software: Release 16. College Station, TX: StataCorp LLC), and all plots were created using R (R Core Team, 2013).
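As a simplified illustration of the geometric-mean comparison and the 20% threshold (not the fitted mixed effects models themselves), with invented monthly counts:

```python
import math

# Illustrative sketch: geometric means of monthly counts across five
# facilities (via log transformation) and the 20% threshold used to
# flag a substantial pre-post change. All counts are invented.
pre = [40, 55, 32, 48, 60]    # baseline-period monthly counts, one per facility
post = [52, 70, 35, 61, 75]   # implementation-period monthly counts

def geometric_mean(values):
    return math.exp(sum(math.log(v) for v in values) / len(values))

relative_change = geometric_mean(post) / geometric_mean(pre)
substantial = relative_change >= 1.2 or relative_change <= 0.8
print(f"relative change: {relative_change:.2f}, substantial: {substantial}")
```

Averaging on the log scale before exponentiating is what makes the facility-level summary a geometric rather than arithmetic mean, which damps the influence of a single large facility.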