In order to address the research gap described in the introduction, the current study has two purposes. First, it uses an item variant derived from Kongsuwannakul (2017) to elicit verbal reports during ConCloze test-task engagement for validity inquiry. Second, it seeks to link the findings of a thematic analysis to Bachman and Palmer’s (1996) model of communicative language ability, insofar as the context for test use is provided. The variable described in the analyses of this study is the test-taking process or strategy, which can take on different values (counts) derived from the verbal reports of the test takers who complete the ConCloze tasks. This forms part of the substantive-validity aspect of construct validity (Messick 1994). The data of this research are thus observational (see Dewitt Wallace Library 2021).
To frame the data collection and analyses, two research questions are posed:
I. What are the test-taking processes and strategies ConCloze test takers use during task engagement?
II. Can the test-taking processes and strategies be linked meaningfully to the model of communicative language ability?
In the literature (Di Zhang 2020, p. 8), cognitive processes are said to be more subconscious and habitual than test-taking strategies, which are more conscious and willful. It is thus worth highlighting that the two research questions above cover test-taking strategies as well. Formulating the research questions this way gives the field a more comprehensive picture of test-task engagement in the ConCloze item type.
Instruments include (a) the ConCloze task design; (b) a test of functionality; and (c) a list of language processes and test-taking strategies for thematic analyses. They will be discussed in order.
The design of the task is adapted from an item variant that is not a main design in Kongsuwannakul (2017). It is a modified constructed-response item format. The aim is to elicit a set of verbal reports and investigate whether the underlying test-taking strategies are comparable to those found in Kongsuwannakul (2017). Exploring the comparability of the strategies in the new verbalizations is useful for two reasons. First, the current item format differs from the selected-response format administered mainly in Kongsuwannakul (2017) and in Kongsuwannakul (2019). Generally, the strategies involved in task engagement can vary with the test method (see, e.g., Buck 1991 for effects of different research methods on performance in listening tasks). Finding shared strategies can increase the generalizability of the findings to the universe of admissible observations. Second, this study aims to link the strategies to Bachman and Palmer’s (1996) model of communicative language ability. Comparability of the strategies would indicate test usefulness in the context of language ability measurement; this study thus provides a warrant for test utility for the ConCloze item type. Figure 8 shows the design used in this study, at the point where the item options have just been offered to a test taker. The idea behind the design is that the test taker must first attempt to answer the open-ended question; then, if the answer is not right or close to the correct answer, the multiple choices are offered for the test taker to choose from. Bachman and Palmer’s (1996) model has also been considered in designing the task: the source of the concordance is specified to be authentic language in use, namely the academic English genre of the Corpus of Contemporary American English (COCA), and the options of the items come from Gardner and Davies’s (2014) new academic vocabulary list, which is based on empirical findings.
Specifying the authentic design as part of the contexts for test use is necessary for using the model in substantive validation of the ConCloze item type. Bachman and Palmer’s (1996) model of communicative language ability is also taken into consideration in the task design in the form of offering the options when the answer to the open-ended question is not right or close to the correct answer. In other words, the test takers communicate verbally or non-verbally that their task completion needs help, and the offer of the choices serves as a response to that communication.
In designing the task, a warm-up activity is also included: two simple math tasks for the test takers to do. The aim is to familiarize them with task verbalization, so that they know how to verbalize the task at hand. No test taker was found unable to verbalize their thoughts while doing the warm-up tasks and the subsequent test tasks. The warm-up tasks were not video-recorded, though, because they are not the focus of the investigation.
Test of functionality
Given that the task design is new to the extent that it has never been used extensively for a whole study (its use in Kongsuwannakul (2017) is limited to a small-scale inquiry), the functionality of the design is also tested. This is carried out simply by asking whether the design leads to random, meaningless task engagement or to meaningful engagement. Table 1 shows the results of the functionality test from the first task assigned to the test takers.
From Table 1, the first task of the design appears to elicit meaningful task engagement from all the test takers. The verbalizers could verbalize their task engagement well, either with or without an immediate-retrospective interview. It can thus be inferred that the item variant was unlikely to invoke random task engagement in any of the three tasks of the study. This finding is not surprising, considering that the test takers had warm-up tasks and were interviewed about their task verbalization one-on-one, compelling active engagement with the researcher. Table 2 shows the details of the test takers in this study. They were purposively selected based on their first-language profile, such that they come from a variety of mother-tongue backgrounds, which is useful for increasing the power of generalization. The datasets used and/or analyzed during the current study are available from the corresponding author on reasonable request.
List of test-taking processes and strategies
Given that Kongsuwannakul (2019) did not mention the test-taking strategies Focusing on clue-containing parts and Choosing an action or solution suitable to the situation in hand, both derived from Kongsuwannakul (2017), the two strategies are included in the list of this study. All other processes and strategies from Kongsuwannakul (2017), which devised them in a Grounded Theory-oriented way, are incorporated as well. Table 3 shows the list in the form of a data-analysis checklist for the ConCloze-taking language processes and strategies.
The checklist in Table 3 is designed so that it can be used for a practical verbal protocol analysis, in a way similar to a thematic analysis (see Braun and Clarke 2006). That is, the video records are played and the checklist is worked through, producing a report of the processes and strategies mobilized in each verbalization session. The criteria for each of the processes and strategies will be discussed in turn. The investigation aims to identify patterns in task engagement across the datasets: three verbal reports per test taker, totaling 42 verbal reports.
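The tallying step of this checklist analysis can be sketched in code. The following is a minimal illustration, not the study’s actual procedure: the strategy names come from Table 3 as described below, while the session data and the function name `tally` are invented for demonstration. Each verbal report is coded as the set of checklist entries it exhibits, and the tally counts, per entry, how many of the 42 reports show it at least once.

```python
from collections import Counter

# The nine processes/strategies in the Table 3 checklist (names from the article).
CHECKLIST = [
    "Assessing item components and difficulty",
    "Choosing an action or solution suitable to the situation in hand",
    "Focusing on clue-containing parts",
    "Rationalizing word combinations and word in context",
    "Recognizing word associate(s)",
    "Retrieving possible words",
    "Taking in context information",
    "Testing compatibility of a given word in context",
    "Testing compatibility of a retrieved word in context",
]

def tally(sessions):
    """Count, per checklist entry, how many verbalization sessions
    exhibit it. `sessions` is a list of sets of strategy names,
    one set per coded verbal report (42 in the full study)."""
    counts = Counter()
    for observed in sessions:
        for strategy in observed:
            if strategy in CHECKLIST:
                counts[strategy] += 1
    return {s: counts[s] for s in CHECKLIST}

# Invented example: two verbal reports coded against the checklist.
example = [
    {"Retrieving possible words", "Taking in context information"},
    {"Retrieving possible words", "Focusing on clue-containing parts"},
]
print(tally(example)["Retrieving possible words"])  # 2
```

Returning a dictionary keyed in checklist order (with zero counts for unobserved entries) keeps the output aligned with Table 3, so absent strategies remain visible in the report rather than silently dropped.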
The first strategy in the checklist of Table 3 is Assessing item components and difficulty. The criterion for this strategy is that the verbalization reflects the test taker’s general strategy in dealing with the task and item components, usually involving meta-cognitive evaluation such as judging the overall difficulty of the test task. For example, Tian began her second ConCloze task by reading and circling the word ‘same’ in the question stem of the task. She emphasized that she needed to seek the word that would complete all the concordance lines, which are part of the item components. On other occasions, though, the task consideration is more subtle. For example, Yanis, while engaging in his first ConCloze task, said that he was not good at this kind of test task. This suggests that he was assessing the difficulty of the test task and believed that the task was difficult for him. Finding this strategy therefore reflects differing ways of dealing with ConCloze test tasks, in that the task evaluation may be explicit or implicit. This agrees with Alderson (1990), who investigated verbal reports of a reading comprehension test and found that individual examinees may vary in how they mobilize their strategies in approaching different items.
The second entry in the checklist of Table 3 is Choosing an action or solution suitable to the situation in hand. The criterion is that a verbalization contains explicit decision-making in relation to the task being dealt with, or that an action or a series of actions is justified retrospectively. For example, Ranma read the first three concordance lines in his first ConCloze task and then, instead of reading onwards to the fourth concordance line, decided to reread the first concordance line, trying to figure out what the missing word was. This strategy seems to accord with notions of test management such as self-monitoring (cf. Cohen 2012 for a classification of test-taking strategies). Accordingly, it can be argued, based on previous research, that meta-cognitive planning and decision-making are inherent in ConCloze engagement alongside language-related processes.
The third strategy in the checklist of Table 3 is Focusing on clue-containing parts. The criterion is that a concordance line is verbalized only in part, usually prior to the KWIC position. More often than not, a pause is also observed, marking a moment of deep processing at the KWIC position, and possible words are verbalized in place of the blank. This strategy explicitly emphasizes an element of decision-making about how best to deal with the particular situation in hand: whether to focus or simply read a whole concordance line. For example, Xavier read the first concordance line in his first ConCloze task and then, instead of reading onwards to the second concordance line, decided to focus by rereading the word immediately before the KWIC position of the first line. It is unknown exactly why a verbalizer decides at that moment not to read an entire line but merely part of it. Yet it is possible that the part focused on is meaningful for solving the blank, either because it contains key words directly related to the missing KWIC, or because the test taker wishes to direct concentration to the part they believe really counts.
The fourth entry in the checklist of Table 3 is Rationalizing word combinations and word in context. The criterion is that the respondent tries to justify a decision made to the researcher, either on their own or upon immediate-retrospective interview. This usually entails explaining why one word should be the answer by, e.g., clarifying their word of choice or describing the context of use for the word retrieved. Where a rationalization happens after the options are offered to the test taker, the process includes rejecting another option in favor of the answer chosen. Typically, this process exhibits the reactivity that the words in a concordance line have towards the KWIC, hence the ‘word combinations’ in the designation. For example, Vinona guessed that the word ‘harsh’ could be the answer for her third ConCloze task after reading the word ‘climate’ in the concordance line after the KWIC position. Then, when she went on to the next concordance line and saw the word ‘community’ after the KWIC position, she said explicitly that she no longer thought the word ‘harsh’ could be the right answer for the item. Nonetheless, based on the researcher’s observation, despite the test takers’ attempts to justify their answers, it can sometimes be a challenge for the respondents to articulate why one option would be more appropriate than the others.
The fifth entry in the checklist of Table 3 is Recognizing word associate(s). The criterion is that a respondent picks out individual words or short phrases from the concordance lines, mostly in order to support their decision or answer. For example, Ute, while working on the sixth line of the second test task, produced the word ‘national’ for the phrase ‘financial markets’ and reread the entire phrase ‘national financial markets’ before continuing with the task engagement. In less conspicuous cases, when asked about the clues they used to reach a decision, the respondents may pick out a short phrase, usually encompassing the KWIC position, as if aware that the phrase may contain important clues. All this evidence seems to underpin the inference that the concordance prompt contains important clues to solving ConCloze tasks, word associates included.
The sixth entry in the checklist of Table 3 is Retrieving possible words. The criterion is that the respondent appears to be deep in thought near the KWIC blank. This is usually followed by a verbal nudge from the researcher, the respondent’s attempt to produce an answer, or their admission that they do not know the answer. For example, Brianna paused at the KWIC blank in the second concordance line of her second ConCloze test task and then sought to produce an answer. An interpretation is that Brianna had likely gone into deep processing at the KWIC position, trying to find the right word for it, before coming up with a probable answer.
The seventh entry in the checklist of Table 3 is Taking in context information. This process is found in the analysis to always be followed by the processes Retrieving possible words and Testing compatibility of a retrieved/given word in context (described later). It can thus be argued that the take-in of context information related to the KWIC is a prerequisite for the testing of word–context compatibility. For example, Archa, in his first ConCloze test task, read the first two concordance lines, and then read the question stem before continuing to read concordance lines 3–5 and admitting that the task was a little difficult. An inference is that Archa had taken in the contextual information available in the first two concordance lines before checking the test task required of him in the question stem.
The eighth entry in the checklist of Table 3 is Testing compatibility of a given word in context. The criterion is that part or the whole of a concordance line is verbalized, usually with a sign of reactivity to the KWIC blank. Signs of reactivity include pausing near or at the KWIC blank and uttering the preceding word(s) in an emphatic manner. Often, an option is also found to be inserted at the very position of the KWIC blank. This process is tied to the ConCloze engagement where an option sheet is offered to the test taker. For example, Qusai, after engaging with his second ConCloze test task for some time and being offered the option sheet, verbalized that he had to first check the suitability of each given word with the context of the concordance lines. Moreover, it is worth emphasizing that the testing of the compatibility of a given word in the concordance is very unlikely to take place meaningfully without processing context information. This means that regardless of the type of expected response (modified constructed-response or selected-response), a core process that must be performed in ConCloze would be Testing a meaningful compatibility of a word in context, a process that merges the two processes in the selected-response and constructed-response formats.
The last entry in the checklist of Table 3 is Testing compatibility of a retrieved word in context. The criterion is that a test taker explicitly tests whether the word they have come up with fits in a concordance line. A test taker would most likely test a word retrieved from their mental lexicon when the task requires them to do so, a situation observed when no option sheet has yet been offered. For example, Zach started his third ConCloze task by reading the first concordance line, producing a possible word for the KWIC blank, and then reading the word together with the second half of the concordance line. In light of the modified constructed-response item format used, it can be stated that this process is tied to that format.