In developing the three instruments discussed here, we found little guidance in previous studies due to the novelty of the pandemic-related topics; the instruments therefore had to be developed largely from scratch. Although deeper data-quality analyses were carried out across the multi-perspective set of questionnaires in the ERiK study, this was only possible to some extent for the three ERiK/CKS instruments due to time constraints before the field launches. First analyses indicate that the reliability and validity of the three instruments described here are adequate, but further exploration will be carried out.

In the ERiK study, data quality was assessed based on content validity (e.g., Sireci, 1998), usefulness (e.g., Benova, 2020), distributions of variables, missingness patterns, and average response time. In a first step, the distributions and missingness patterns of the related variables were critically examined, e.g., whether distributions are skewed, how many values are missing, and whether these missings occurred particularly frequently in some populations. We considered, for example, an item non-response of 7% on the two items on directors’ subjective level of informedness regarding pandemic-related regulations, such as the protection of ECEC staff and protective measures for children, as rather high. However, the probability of non-response on both items did not vary statistically significantly across core demographic characteristics of directors (gender, age, education; alpha = 0.05).
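A check of this kind can be sketched as a chi-square test of whether item non-response is associated with a demographic characteristic. The counts and the 2x2 layout below are invented for illustration; the ERiK team's actual test and categories may differ.

```python
# Sketch: does item non-response depend on a demographic characteristic?
# All counts are hypothetical illustration data, not ERiK-Surveys 2020 figures.
import numpy as np
from scipy.stats import chi2_contingency

# Rows: director gender; columns: responded vs. item non-response
# on both informedness items.
table = np.array([
    [620, 45],   # female directors: responded, missing
    [250, 20],   # male directors:   responded, missing
])

chi2, p, dof, expected = chi2_contingency(table)
print(f"chi2 = {chi2:.2f}, df = {dof}, p = {p:.3f}")
# A p-value above 0.05 indicates no statistically significant association
# between gender and item non-response on these items.
```

The same table can be built for age bands or education levels; with very small cell counts, Fisher's exact test would be the more robust choice.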

This first evaluation process allowed for a quick assessment of the newly generated instrument. At the same time, the average processing time of the overall web questionnaires was rather high (about 35 min for pedagogical staff, 55 min for directors [Footnote 6] and 105 min for youth welfare offices; Klinkhammer et al., in prep.). In survey methodology, questionnaires that take more than 30 min to complete are often considered “unreasonable,” and overly long questionnaires can be one of the main sources of erroneous responses (Tourangeau et al., 2009). Hence, these durations may have influenced the sample composition and the response behavior. Indeed, two additional non-response surveys of providers and directors show that non-response was largely due to time restrictions; e.g., 17% of the directors considered the questionnaire too long and therefore did not participate in the ERiK-Surveys 2020 (cf. Schacht et al., in prep.). These steps provided the basis for an extensive reduction of the questions for a renewed cross-sectional survey in 2022. The directors’ questionnaire was shortened by 13 questions, reducing the overall number of questions to 90. Particularly relevant for the reductions was whether the items were still classified as useful. Usefulness was assumed if the items were analyzed and presented in a research report for the project and if external specialists considered the questions relevant for 2022. For this purpose, all questions were discussed in three sessions with regard to their usefulness. The omitted questions include those from the ERiK instrument presented here, as it was assumed that this instrument captured specific pandemic-related information that could not be collected again in 2022.

For the overall ERiK questionnaire, descriptive statistics based on the net sample of the ERiK-Surveys 2020 were compared with information from the National Child and Youth Welfare Statistics (KJH), which contain information about the children attending ECEC and the staff working in the field (cf. Statistisches Bundesamt, 2020) (external validity). For the survey of directors, slight deviations in these descriptive statistics were found: directors with higher volumes of employment or with a work-relevant university degree (compared to other degrees) were overrepresented relative to the KJH (chi-square goodness-of-fit test, statistically significant at alpha = 0.05). This marginal bias in the two socio-demographic variables studied was compensated for by a corresponding weighting procedure (cf. Schacht et al., in prep.) [Footnote 7]. With regard to the ERiK instrument presented here, however, such an external validity comparison with the National Child and Youth Welfare Statistics was not possible, as the latter do not contain any corresponding information on, e.g., directors’ subjective level of informedness on protection of ECEC staff and protective measures for children.
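The external-validity comparison and the subsequent weighting step can be sketched roughly as follows. All counts and benchmark shares are invented, and the actual ERiK weighting procedure (cf. Schacht et al., in prep.) is more elaborate than this two-cell example.

```python
# Sketch: goodness-of-fit check against an official benchmark, followed by
# simple post-stratification weights. All figures are invented illustration
# data, not the actual ERiK or KJH distributions.
import numpy as np
from scipy.stats import chisquare

# Directors with a work-relevant university degree vs. other degrees.
observed = np.array([480, 320])      # counts in the net sample
benchmark = np.array([0.52, 0.48])   # proportions in the official statistics

expected = benchmark * observed.sum()
stat, p = chisquare(observed, f_exp=expected)
print(f"chi2 = {stat:.2f}, p = {p:.4f}")  # significant -> overrepresentation

# Cell weights: benchmark share divided by sample share.
sample_share = observed / observed.sum()
weights = benchmark / sample_share
print("weights:", np.round(weights, 3))
```

Multiplying each respondent's weight by the ratio of benchmark share to sample share makes the weighted sample distribution match the benchmark exactly, which is the basic idea behind post-stratification.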

Nonetheless, comparing directors’ subjective level of informedness on protection of ECEC staff and protective measures for children with the views of other stakeholders captured in the ERiK-Surveys 2020 gives a sense of the data quality achieved with these two items. As can be seen in Fig. 1, directors felt subjectively less well informed than other ECEC stakeholders on both items. The ERiK team and experts from the field would have expected agencies and youth welfare offices to feel particularly well informed, since they are in especially close contact with other relevant stakeholders such as the state health authorities. It also seems plausible that, on average across all stakeholders, informedness about protective measures for children (or parents, depending on the target population) was better than informedness about self-protection and the protection of staff and childminders. In this respect, the patterns presented here appear highly plausible to the ERiK team and the experts who discussed the results with us. These preliminary results support the data quality achieved with the two new items included in the ERiK instrument [Footnote 8].

Fig. 1

Directors’ subjective level of informedness on protection of ECEC staff and protective measures for children compared to information given by the other four ERiK target populations. Legend: as reported by directors (D), pedagogical staff (PS), childminders (C), youth offices (Y) and providers (P), ERiK-Surveys 2020, weighted results

Regarding the data quality of the CKS, only limited comparisons with official statistics were possible because the pandemic-related items in the study were newly developed. However, since the sample consisted of ERiK participants, CKS respondents can be compared with that group in order to determine whether non-response was random. Results of a chi-square goodness-of-fit test show that the state of Berlin (3.8% in the original sample versus 2.9% among respondents) and municipalities with at least 500,000 inhabitants (12.1% versus 10.6%) are slightly underrepresented, and municipalities with fewer than 2,000 inhabitants overrepresented (5.4% versus 9.9%), compared to the ERiK data. The average response time for the CKS directors’ questionnaire was about 35 min (SD = 13.72), which also exceeds the suggested 30-min threshold.
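Using the shares reported above, the goodness-of-fit check can be sketched as follows. The respondent count n is hypothetical, as the exact case numbers are not repeated here, and the residual "all others" category is derived by subtraction.

```python
# Sketch: goodness-of-fit test of the CKS respondent composition against the
# original ERiK sample, using the shares reported in the text. The number of
# respondents n is hypothetical.
import numpy as np
from scipy.stats import chisquare

n = 1000  # hypothetical number of CKS respondents
# Categories: Berlin, >=500,000 inhabitants, <2,000 inhabitants, all others.
erik_shares = np.array([0.038, 0.121, 0.054, 0.787])  # original ERiK sample
cks_shares  = np.array([0.029, 0.106, 0.099, 0.766])  # CKS respondents

observed = cks_shares * n
expected = erik_shares * n
stat, p = chisquare(observed, f_exp=expected)
print(f"chi2 = {stat:.2f}, df = 3, p = {p:.4f}")
# A significant result indicates that non-response was not completely random
# with respect to region and municipality size.
```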

Although only selected items are analysed here, the directors’ data can be compared on multiple levels: with ERiK data, as described above; with external data; with the directors’ own data from a different measurement point; or with other CKS data that are linked to the directors’ questionnaire.

To capture changes over time, both within and between centers or persons, the study uses multiple measurement points in most surveys. In the case of the directors’ survey, four groups of directors (with differing starting dates) were each interviewed twice, with about four months between the two measurement points (questionnaires did not differ between groups, but between measurement points). This is especially helpful against the background of a dynamic infection process in Germany during the study, because specific measures and events in ECEC centers depend strongly on the regional infection situation, and the data can then also be contrasted with changes in infection numbers.

The CKS instruments contain 16 items with a 5-point Likert scale and 7 items with a 6-point Likert scale covering a wide variety of measures, enabling respondents to indicate even small changes, for example in the prevalence of mask-wearing or of group separations in ECEC settings. Together with data on COVID-19 cases in the population, this makes it possible to compare behavioral and infection trends, as can be seen in Fig. 2. The lines clearly show that during the “second wave” in the autumn and winter of 2020/2021, infection rates as well as the likelihood of pedagogical staff wearing facemasks changed substantially. After that, infection rates decreased again, while the prevalence of mask-wearing remained largely unchanged, possibly due to habituation effects or to mask mandates staying in place. Analyses such as this one can serve as external validity checks when survey data are merged with data from other sources, even though such a figure can only show correlation, not a causal effect.

Fig. 2

Weekly COVID-19 infection rates and prevalence of mask-wearing of pedagogical staff in interactions with colleagues. Legend: as reported by directors in the respective ECEC centers, CKS-Survey 2020
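A merge of this kind, combining weekly infection rates with survey-based mask-wearing prevalence as in Fig. 2, might look like the following sketch. All weekly values are invented, and the real CKS data are aggregated differently.

```python
# Sketch: merging external infection data with survey-based prevalence by
# calendar week and comparing the two trends. All values are invented.
import pandas as pd

infections = pd.DataFrame({
    "week": ["2020-W45", "2020-W46", "2020-W47", "2020-W48"],
    "incidence": [120.0, 145.0, 160.0, 155.0],   # cases per 100,000
})
survey = pd.DataFrame({
    "week": ["2020-W45", "2020-W46", "2020-W47", "2020-W48"],
    "mask_share": [0.41, 0.48, 0.55, 0.57],      # share of centers with masks
})

merged = infections.merge(survey, on="week")
print(merged)
print("trend correlation:",
      round(merged["incidence"].corr(merged["mask_share"]), 2))
```

As the text notes, such a merge only supports correlational statements about parallel trends, not causal claims.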

But even when there are no significant changes in the mean values of a variable (e.g., a similar assessment of implementation across ECEC centers), valuable information can still be obtained by analysing intra-individual changes within a person or institution (e.g., learning effects can be assumed in some cases). Figure 3, for instance, shows that the frequency of value 4 (i.e., measures implemented to a “good” extent) does not change much overall; however, only a small share of those reporting a 4 in the first survey also reported that answer in the second one. The prevalence of such intra-individual variance can be important for multivariate analyses, such as fixed-effects regression models, which require a certain amount of variance in the dependent and independent variables.

Fig. 3

Extent of implementation of group separation indoors as reported by directors. Legend: changes in answers between first and second point of measurement, CKS-Survey 2020
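The intra-individual change pattern behind Fig. 3 can be illustrated with a simple cross-tabulation of the two measurement points. The ten answer pairs below are invented so that the marginal share of value 4 stays constant while individual answers shift; they do not reproduce the CKS data.

```python
# Sketch: stable marginal distribution, but substantial intra-individual
# change between the two measurement points. Invented illustration data.
import pandas as pd

t1 = pd.Series([4, 4, 3, 4, 2, 4, 5, 3, 4, 4], name="t1")  # first survey
t2 = pd.Series([3, 4, 4, 5, 4, 3, 4, 4, 4, 2], name="t2")  # second survey

print(pd.crosstab(t1, t2))  # transition table between measurement points

# The marginal frequency of value 4 is identical at both points ...
print("share of 4 at t1:", (t1 == 4).mean())
print("share of 4 at t2:", (t2 == 4).mean())
# ... yet only part of the t1 = 4 group answered 4 again at t2.
stable = ((t1 == 4) & (t2 == 4)).mean() / (t1 == 4).mean()
print("share of stable 4-answers:", round(stable, 2))
```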

Furthermore, intra-individual variance can give hints as to how valid the instrument is. In a situation as volatile as the ongoing pandemic, where pedagogical staff are constantly confronted with changing situations and challenges from parents and policymakers, differing answers at different points in time can be expected. Not finding such intra-individual changes could therefore be a sign of a lack of validity, since the instrument would then apparently measure some other aspect that is not influenced by the pandemic.

Other quality analyses are also possible by combining information from different sources: in the case of the CKS directors’ survey, parents and pedagogical staff of a subsample were also interviewed. This makes it possible to merge the respective data sets and compare the information given by different groups, thus gaining insights into multi-perspective answering and into the reliability of the instruments.

For example, when comparing the directors’ survey (measurement point 2) and the staff survey (measurement point 1), which mostly took place within four weeks of one another, similarities as well as differences can be found. Regarding how well the observance of physical distancing between staff is working, there is no strong correlation between the information reported by staff and directors (Kendall’s tau = .03), and directors rate the implementation far more optimistically (mean 3.83 vs. 2.57 on a scale from 1 to 5). However, smaller differences (3.87 vs. 3.48) and stronger correlations (Kendall’s tau = .20) can be observed when respondents rate how well physical distancing between staff and children of other groups is working. These results illustrate that pedagogical staff and directors can perceive situations quite differently, which has to be taken into account when analyses are conducted.
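A comparison of this type, pairing director and staff ratings and computing Kendall's tau and the mean difference, can be sketched as follows. The ten rating pairs are invented and do not reproduce the reported coefficients.

```python
# Sketch: rank correlation and mean difference between paired director and
# staff ratings on a 1-5 scale. Invented illustration data.
from scipy.stats import kendalltau

directors = [4, 4, 3, 5, 4, 3, 4, 5, 4, 3]  # directors' ratings
staff     = [2, 3, 2, 3, 2, 3, 2, 3, 3, 2]  # matched staff ratings

tau, p = kendalltau(directors, staff)
mean_diff = sum(directors) / len(directors) - sum(staff) / len(staff)
print(f"Kendall's tau = {tau:.2f}, p = {p:.3f}")
print(f"mean difference (directors - staff) = {mean_diff:.2f}")
```

A low tau with a large positive mean difference would mirror the pattern reported above: directors rate implementation more favourably, and their ranking of centers barely agrees with that of the staff.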
