We observed decreases in the proportion of on-time vaccinations following EIR-introduction. However, we emphasize caution in interpreting these findings, as additional information is needed to understand whether the observed changes reflect true estimates of timeliness or "noise" arising from incomplete EIR vaccination records and biases in the data capture process. In our sensitivity analysis, we observed that among children receiving DTP2, DTP1 timeliness improved following EIR-introduction, supporting our belief that EIR records are incomplete and that the decreases in timeliness observed in the primary analysis may not be accurate.

A crude comparison of our estimates to the most recent Demographic and Health Survey (DHS) conducted in Tanzania (2015-2016) provides further evidence of inconsistent EIR data entry, if we assume true immunization coverage did not vary much between 2015 and 2018. The DHS defines timeliness as children receiving recommended vaccinations before age 12 months [29]. We compared the survey estimates to EIR estimates using the same timeliness definition and observed that, prior to EIR-introduction, national timeliness estimates were comparable across data sources; for instance, DTP1 timeliness was 96.5% in the EIR and an estimated 96.6% nationally from the DHS (see Additional file 1). However, estimates for DTP2, DTP3, and MCV1 decreased following EIR-introduction, suggesting inconsistent data entry post-EIR. We note that DHSs use information recorded on vaccination cards and from parental recall; these data have been found to be unreliable when compared to medical records, and the DHS should therefore not be considered the gold standard for our comparison [30].
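To make the shared definition concrete, DHS-style timeliness can be computed directly from individual-level records. The sketch below is illustrative only; the record layout is hypothetical and does not reflect the EIR's actual schema:

```python
from datetime import date

def first_birthday(dob: date) -> date:
    """Date the child turns 12 months old (Feb 29 maps to Mar 1)."""
    try:
        return dob.replace(year=dob.year + 1)
    except ValueError:
        return date(dob.year + 1, 3, 1)

def on_time(dob: date, vax_date: date) -> bool:
    """DHS-style timeliness: vaccination received before age 12 months."""
    return dob <= vax_date < first_birthday(dob)

# hypothetical records: (date of birth, DTP1 vaccination date)
records = [
    (date(2016, 3, 10), date(2016, 5, 2)),   # before first birthday
    (date(2016, 3, 10), date(2017, 6, 1)),   # after first birthday
]
timeliness_pct = 100 * sum(on_time(d, v) for d, v in records) / len(records)
```

A timeliness estimate such as the 96.5% figure above is simply this proportion computed over all eligible children in the registry.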

Our research group previously noted that "completeness and quality of data input into a system dictates the accuracy of the estimates generated by the system, contingent on the system's design, user compliance, and system maintenance", and that "calculating accurate estimates of performance measures using EIR data will likely remain elusive until the challenges" have been addressed [31]. Implementation challenges that may have affected the completeness and/or accuracy of the data in our study included: inconsistent use of the EIR over time; the official requirement of dual data entry, with the paper record remaining the official record, potentially leading HCWs to keep paper records more complete than EIR records; inconsistent use of unique patient identifiers, causing individuals to have multiple IDs; and poor data entry practices arising from workflow or training issues [32, 33]. Inconsistent use of the system is further complicated by facilities using different methods for entering data retrospectively and documenting outreach sessions, as well as by staffing changes. During early to mid-2017, the Tanzanian government dismissed approximately 10,000 public employees, including HCWs, who could not produce a paper certificate proving they had completed their secondary education [34]. This drop in the workforce likely affected HCWs' capacity to use the EIR consistently during our study period. Additionally, server-side issues, such as the server timing out or being overloaded, could have prevented all data from being made available. Finally, the EIR's validation rules may not have been functioning correctly, since we found many records with implausible vaccination dates given the recorded date of birth.
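As an illustration of the kind of validation rule that appears not to have been enforced, a registry could flag implausible dates at entry time. This is a minimal sketch under our assumptions about the failure mode, not the EIR's actual rule set:

```python
from datetime import date

def implausible_vaccination_date(dob: date, vax_date: date,
                                 entry_date: date) -> bool:
    """Flag records a registry's validation rules should reject:
    a vaccination dated before the child's birth, or dated in the
    future relative to the date of data entry."""
    return vax_date < dob or vax_date > entry_date

entry = date(2018, 6, 1)
# vaccination date precedes date of birth -> should be rejected
bad = implausible_vaccination_date(date(2017, 5, 1), date(2016, 5, 1), entry)
```

Records failing such a check would be returned to the HCW for correction rather than stored, which is the behavior we would have expected from functioning validation rules.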
Studies based in other countries have found similar challenges, particularly high rates of under-reporting, which cause underestimation of vaccination coverage; insufficient IT literacy for adopting DHIs; and poor integration of the DHI within the existing health system [7, 35,36,37].

Considering that our study team, after consulting with implementers and MOH staff, found no alternative reason for true vaccination timeliness to decrease following EIR-introduction, and that our results did not align with survey data, the most likely explanation is that our primary analysis suffers from the presumed threats to validity. Rather than timeliness decreasing at a population level, it seems more likely that the results reflect the EIR implementation challenges described above. However, some facilities may have captured more accurate information in the EIR than was previously captured by paper-based tools; post-EIR, we may therefore have observed true vaccination timeliness previously uncaptured by surveys and other assessment methods, which is useful when reviewing the descriptive results but less useful for our time-series analysis. It will be important to reanalyze the data in a few years to understand whether the trends have changed with improvements in EIR use. These threats to validity also further violate the exchangeability assumption needed to assess impact using an interrupted time-series analysis, as we cannot assume that children's records entered retrospectively and prospectively are comparable. Multiple potential explanations for the trends in our data remain, and confirming them would require primary data collection.
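For readers unfamiliar with the segmented-regression form of an interrupted time-series analysis, the sketch below fits the standard level-and-slope-change model on synthetic monthly data. The series, intervention month, and effect sizes are invented for illustration and are not our study's estimates:

```python
import numpy as np

# Segmented regression for an interrupted time series:
#   y = b0 + b1*t + b2*post + b3*(t - t0)*post
# b2 = level change at the interruption, b3 = slope change after it.
t = np.arange(24, dtype=float)   # 24 months of observations
t0 = 12.0                        # intervention at month 12
post = (t >= t0).astype(float)

rng = np.random.default_rng(0)
# synthetic timeliness series: gentle upward trend, 10-point drop at t0
y = 90.0 + 0.1 * t - 10.0 * post + rng.normal(0.0, 0.5, t.size)

X = np.column_stack([np.ones_like(t), t, post, (t - t0) * post])
beta, *_ = np.linalg.lstsq(X, y, rcond=None)
level_change, slope_change = beta[2], beta[3]
```

The exchangeability concern noted above corresponds to the assumption that, absent the interruption, the pre-period coefficients (b0, b1) would have continued to describe the post-period observations.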

Can digital health interventions improve health outcomes?

We have outlined the potential implementation challenges that may have impacted our analysis; however, it is worth revisiting our theoretical framework to understand where gaps in adoption and use of the EIR may have occurred and subsequently affected vaccination timeliness. EIRs are implemented within complex health systems and require HCWs to accommodate new workflows that incorporate the tool; their effectiveness therefore relies on how well they are designed, developed, implemented, and used [21]. Consistent entry of data into an EIR may depend on HCW competency in using DHI tools, a facility's internet and electricity connectivity, dual data entry requirements, and HCW motivation, all of which could affect the completeness of EIR records.

Simply having HCWs utilize an electronic tool will not on its own increase the timeliness of vaccinations; our causal linkage diagram shows that HCWs would have to use the information in the EIR to encourage caregivers to bring children on time for their next scheduled immunization appointment and to follow up with defaulters. A realist review reported similar findings, with only moderate- to low-certainty evidence of EIRs improving data use among district- and facility-level staff [21, 38]. Although HCWs in Tanzania were trained to follow up using EIR information, it is unclear how consistently this was performed. We did not estimate the effect of the SMS-reminder component of the intervention due to the limited follow-up time, but this is a future area of research, as it could have affected timeliness. Finally, the length of time needed to observe behavior change is unknown, as it likely varies by caregiver, HCW, and facility. Our assumptions may have underestimated the amount of time needed for HCWs' behaviors to change.

Furthermore, these challenges raise the question: if these systems have no impact on timeliness, should we invest in them? Improvements have been seen in other settings: an EIR and SMS-reminder system was successfully deployed in Vietnam, where improvement in vaccination timeliness was observed 2 years after system introduction [6]. Additionally, in high-income settings, improvements in coverage and timeliness have been attributed to electronic systems [5]. However, it is worth first asking whether health-related outcomes are the best measures of the impact of these digital systems. Considering the large footprint required for deployment, which often involves cross-team collaboration and engages staff at each level of the health system, these systems have additional effects and impacts that are not captured by patient health outcome metrics but may still improve healthcare provision. Digital systems can provide a secure location for record storage, increase patient trust in the healthcare system, improve data quality and accessibility, and reduce the burden of data management activities, freeing up time for staff to focus on patient care [39]. As the health benefits of DHIs may take 3-13 years to be observed, the importance of these more proximal process outcomes should be acknowledged, and these metrics should be used in DHI evaluations [40].

Study strengths

Our study took advantage of the opportunity to use individual-level RHIS data to conduct a quasi-experimental analysis. We were able to showcase the utility and power of these data for answering an implementation science research question by developing appropriate performance metrics within the Tanzanian context that considered changes over time, clinician practices at the facility and district levels, and the cohort of children we expected to be most affected by the EIR's introduction. The study design process required that the research and implementation teams work closely together to create a model that would accurately capture EIR introduction and use among facilities and provide interpretable findings.


Study limitations

We encountered numerous challenges using these data to answer our research question, mostly due to EIR-implementation complexities. On review, we considered some of these challenges natural to the process of implementing a new DHI. However, using these data "as-is" for our analysis was difficult due to poor data quality, necessitating a clear understanding of the implementation setting and of the challenges to continuous data entry and use. Several important contextual factors were not accounted for in our models. Because time was centered on the EIR introduction date, we were unable to account for the timing of other events or secular trends that could have affected vaccination timeliness, such as the public employee dismissals or changes made to the intervention package. Additionally, without further verification through other data sources and observations, it is difficult to know how complete the EIR data are or when to consider the data to have "normalized". We also recognize that our study could be affected by system impacts not accounted for in the analysis and by other unmeasured confounders. A key limitation of this study is that we were unable to assess immunization coverage as an outcome, since we lacked data on the full denominator population of children eligible for vaccination in the community. Future analyses should assess vaccination drop-out, comparing the number of children receiving the first dose to the number receiving the third, as this measure accounts for the entire vaccine series and allows changes to be measured at the individual level rather than the facility level.
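The drop-out measure suggested above is a simple proportion of series starters who do not complete the third dose; a minimal sketch with hypothetical counts:

```python
def dropout_rate(n_dose1: int, n_dose3: int) -> float:
    """Dose-1-to-dose-3 drop-out: percentage of children who started
    the series (received dose 1) but did not receive dose 3."""
    if n_dose1 == 0:
        raise ValueError("no first-dose recipients")
    return 100.0 * (n_dose1 - n_dose3) / n_dose1

# e.g. 200 children received DTP1, 170 of them received DTP3
rate = dropout_rate(200, 170)   # 15.0
```

Because each child contributes to both the numerator and denominator, shifts in this measure can be traced back to individual records rather than facility-level aggregates.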
