Fall 2015

Global and node-level measures

In the Fall 2015 semester, 48 students were enrolled in the course (Physics I) on census day. Three students withdrew, and one student, who did not take all of the exams, received a failing grade. As a result, our networks for Fall 2015 have N = 44 nodes. Table 3 shows summary statistics for the fall semester. The left-hand column of Fig. 4 shows the sociograms for each of the exam networks, with nodes colored by their CONCOR block membership. From the first test to the second, there was a sizable drop in the number of edges and in the reciprocity of named links, which corresponded to a lower density and average degree. The second exam also had a notably lower transitivity, though its average local clustering coefficient (AvgCC) remained comparable to the others. This contrast may occur because the local clustering coefficient tends to heavily weight low-degree nodes (Newman 2003), of which there were more on exam 2. Exams 2 and 4 had the highest average vertex-vertex distance ignoring disconnected node pairs (AvgDist) but the lowest vertex-vertex distance when disconnected node pairs are included (AvgDistUC). This occurs because exams 2 and 4 are (at least weakly) connected networks, while exams 1 and 3 have several unconnected sub-components. Finally, the degree assortativity varied widely across exams: high for the first and last exams, moderate for the third, and effectively zero for the second. Broadly, exam 2 seems to have scattered the nascent social structure, which re-established itself later in the semester.
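The global measures above are standard network statistics. As an illustrative sketch (not the authors' pipeline), they can be computed with networkx on a small stand-in directed network; the edge list here is invented for demonstration:

```python
# Illustrative sketch (not the authors' code): computing the global measures
# reported in Table 3 for a small stand-in directed network.
import networkx as nx

G = nx.DiGraph([(1, 2), (2, 1), (2, 3), (3, 4), (4, 2), (1, 3)])

density = nx.density(G)                  # edges / possible directed edges
reciprocity = nx.reciprocity(G)          # fraction of links that are mutual
avg_degree = sum(d for _, d in G.degree()) / G.number_of_nodes()
avg_cc = nx.average_clustering(G)        # AvgCC: mean local clustering
transitivity = nx.transitivity(G.to_undirected())  # global clustering
assortativity = nx.degree_assortativity_coefficient(G)

# AvgDist: mean shortest-path distance over reachable pairs only, which is
# what lets a disconnected network report a finite value.
lengths = dict(nx.all_pairs_shortest_path_length(G))
dists = [d for src in lengths for _, d in lengths[src].items() if d > 0]
avg_dist = sum(dists) / len(dists)

print(f"density={density:.2f} reciprocity={reciprocity:.2f} "
      f"avg_degree={avg_degree:.2f} AvgDist={avg_dist:.2f}")
```

Averaging distances over reachable pairs only (AvgDist) versus all pairs (AvgDistUC) is what produces the seemingly contradictory ordering of exams 2 and 4 noted above.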

Fig. 4

Fall 2015 sociograms (left column) and reduced networks (right column) for exams 1–4, partitioned using the CONCOR algorithm

Table 3 Summary statistics for the fall semester (N = 44 nodes)
Fig. 5

Degree centrality distributions, scatterplots, and Spearman correlation coefficients for Fall 2015 exam 1

In addition to comparing the values of the network measures for each of the exams, we also analyzed centrality distributions for each exam network and explored how they evolved over the semester. In general, undirected versions of each statistic correlated well with their directed versions for the same exam. As an example, we present the different types of degree distributions for Fall 2015 Exam 1 in Fig. 5, as well as how they correlated with each other. These distributions were frequently not normal, so we used the Spearman correlation throughout this paper. We should note that the out-degree distribution is peaked in the middle, which is not common for network degree distributions. This is likely because the tables the students worked at had eight seats: the average table held about six students (44 students sitting at seven tables), and students were observed generally interacting with everyone at their table, which together produce a bell-shaped distribution. Within a single network, it is not surprising that related centrality measures correlated with each other, and the correlation observed for the degree family of centrality statistics continued for the other families of centrality measures.
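A hedged sketch of this comparison, using a random stand-in network rather than classroom data and scipy's `spearmanr` for the rank correlation:

```python
# Hedged sketch: comparing total, in-, and out-degree distributions with the
# Spearman rank correlation (used because the distributions need not be
# normal). The network is a random stand-in, not the observed exam network.
import networkx as nx
from scipy.stats import spearmanr

G = nx.gnp_random_graph(44, 0.15, directed=True, seed=1)
nodes = sorted(G.nodes())

total = [G.degree(n) for n in nodes]
indeg = [G.in_degree(n) for n in nodes]
outdeg = [G.out_degree(n) for n in nodes]

rho_in, _ = spearmanr(total, indeg)
rho_out, _ = spearmanr(total, outdeg)
print(f"total vs in-degree:  rho = {rho_in:.2f}")
print(f"total vs out-degree: rho = {rho_out:.2f}")
```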

Fig. 6

In-degree centrality distributions, scatterplots, and Spearman correlation coefficients for the Fall 2015 exams

Fig. 7

In-closeness centrality distributions, scatterplots, and Spearman correlation coefficients for the Fall 2015 exams

We are also interested in how centrality lasts for nodes in evolving networks. In particular, are highly central nodes in early networks also highly central in later networks? As we discussed earlier, we will focus on directed or inward measures of network centrality. In Fig. 6, we show the distributions, scatterplots, and correlations for in-degree centrality for each of the exams in Fall 2015. The correlation between exam 1 in-degree and any other exam's in-degree was small (R = 0.17 was the largest), but for subsequent exams the correlations became stronger. In Fig. 7, we show the distributions, scatterplots, and correlations for in-closeness centrality for each of the exams in Fall 2015. None of the correlations were significant (R = 0.34 was the largest observed) and some were negative. A significant fraction of nodes had a notably small in-closeness relative to the group, making the distributions bi-modal and the correlation coefficients more difficult to interpret. In Fig. 8, we show the distributions, scatterplots, and correlations for directed eigenvector centrality for each of the exams in Fall 2015. The correlations for these distributions were similar to the in-closeness distributions in that they were driven by bi-modal distributions in the centrality scores. There was a notable correlation (R = 0.48 between exams 2 and 4), but this was highly influenced by the large fraction of nodes with an eigenvector centrality of nearly zero. In Fig. 9, we show the distributions, scatterplots, and correlations for directed betweenness centrality for each of the exams in Fall 2015. The betweenness statistic returns (somewhat) to the pattern that we observed with the degree statistic. However, it is interesting to note that the maximum betweenness score varied by approximately a factor of 5–6 across the exams (approximately 100 on exams 1 and 3 and 500–600 on exams 2 and 4).
In all distributions, the mode betweenness score was zero, suggesting that the correlations are driven by the censored nature of the distributions. Put another way, in these classroom networks most nodes were not very “between” regardless of the exam. However, there is also no consistent set of highly between students driving the classroom collaboration networks during the fall semester.
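The persistence analysis above can be sketched as follows: compute one centrality vector per exam network over a fixed roster, then correlate across exam pairs. The four networks here are random stand-ins for the observed exam networks:

```python
# Hedged sketch of the centrality-persistence analysis with random stand-in
# exam networks. In-degree is shown; closeness, eigenvector, or betweenness
# centrality would substitute directly.
import networkx as nx
from itertools import combinations
from scipy.stats import spearmanr

exams = {i: nx.gnp_random_graph(44, 0.1, directed=True, seed=i)
         for i in range(1, 5)}
nodes = sorted(exams[1].nodes())  # same roster on every exam

# One centrality vector per exam, in a fixed node order so they align.
indeg = {i: [G.in_degree(n) for n in nodes] for i, G in exams.items()}

for a, b in combinations(exams, 2):
    rho, _ = spearmanr(indeg[a], indeg[b])
    print(f"exam {a} vs exam {b}: rho = {rho:.2f}")
```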

Fig. 8

Eigenvector (directed) centrality distributions, scatterplots, and Spearman correlation coefficients for the Fall 2015 exams

Fig. 9

Betweenness (directed) centrality distributions, scatterplots, and Spearman correlation coefficients for the Fall 2015 exams

Network partitioning

We are also interested in looking at how network roles change over the course of the semester using CONCOR. Figure 10 illustrates the difference that can emerge in going from two to three CONCOR splits for the second exam in Fall 2015. The first and second positions split along fairly obvious lines: two subgroups which were not connected to each other in the first case, and two internally-dense subgroups with a smaller number of bridging links. The third position splits into a core group of five nodes and a two-node position of students who have no connections to each other, but are both peripheral to the core position. Finally, the fourth position splits into a dense group and a secondary group with only sparse links, either to each other or to the main group. We have investigated CONCOR block membership with 3 splits for each of the exams in the Fall 2015 semester.
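CONCOR itself is straightforward to sketch: correlate each node's combined row-and-column profile, then repeatedly correlate the resulting correlation matrix until its entries converge toward ±1, and split the nodes by sign; repeating the procedure within each block yields further splits. A minimal one-split sketch on a random stand-in adjacency matrix (not the authors' implementation):

```python
# Minimal CONCOR sketch (one split) on a random stand-in adjacency matrix.
import numpy as np

rng = np.random.default_rng(0)
A = (rng.random((12, 12)) < 0.3).astype(float)
np.fill_diagonal(A, 0)

# Stack rows and columns so both sending and receiving ties define each
# node's structural profile (appropriate for a directed network).
profile = np.vstack([A, A.T])
C = np.corrcoef(profile, rowvar=False)   # node-by-node profile correlations

# Iterate: correlate the correlation matrix until entries approach +/-1.
for _ in range(50):
    C = np.corrcoef(C)
    if np.allclose(np.abs(C), 1.0, atol=1e-6):
        break

# First split: nodes positively vs negatively correlated with node 0.
block = (C[0] > 0).astype(int)
print("block sizes:", np.bincount(block, minlength=2))
```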

Fig. 10

Fall 2015 exam 2 showing 2 versus 3 CONCOR splits

Fig. 11

Fall 2015 sociograms for exams 1–4 in each row labeled A–D respectively. In the left column, node color is given by the CONCOR algorithm, while on the right, node color is given by the edge-betweenness algorithm

Block membership on exams 1 and 2 has elements that are common to community detection algorithms; for example, isolated groups form several blocks. But it also exhibits notable differences. For example, on exam 1 (panel A in Fig. 4), CONCOR splits the top bundle of nodes into four different blocks (node color is generated from each network’s CONCOR blocks, and does not persist from network to network). These two cases also show a behavior that is unlikely or impossible in most community detection methods: grouping together nodes which are loosely or entirely unconnected to each other, but which belong together because of their linking behavior with respect to another network position. Fig. 11 shows the same networks with node colors based on CONCOR (left-hand column) and edge-betweenness (right-hand column) for each of the exams. In each case, the CONCOR splits can group nodes quite differently than edge-betweenness does. For example, in exam 4 (Fig. 11, row D), the green group identified by edge-betweenness is almost reproduced by CONCOR, with one notable exception: a single orange node connecting that group to the rest of the network. That student is performing a function different from the rest of the “green” group. CONCOR can and often does detect clusters that are internally dense, but it can also highlight nodes that are visually part of a larger cluster but in fact are only peripherally tied to it. In the right-hand column of Fig. 4, we present the reduced networks for these exams. We find that blocks are more connected on exams 1 and 2 than on exams 3 and 4, as evidenced by the number of inter-block connections.
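For contrast with blockmodeling, edge-betweenness community detection (Girvan–Newman) can be sketched as follows. Note that networkx's implementation is run here on the undirected projection, and the toy network (two triangles joined by a single bridge edge) is invented for illustration:

```python
# Hedged sketch: edge-betweenness (Girvan-Newman) community detection on a
# toy network of two dense triangles joined by a single bridge edge.
import networkx as nx
from networkx.algorithms.community import girvan_newman

G = nx.DiGraph([(0, 1), (1, 0), (1, 2), (2, 0),   # triangle one
                (3, 4), (4, 5), (5, 3),           # triangle two
                (2, 3)])                          # bridge edge

# girvan_newman repeatedly removes the highest edge-betweenness edge until
# the network splits; on this toy graph the first removal is the bridge.
communities = next(girvan_newman(G.to_undirected()))
print(sorted(map(sorted, communities)))  # → [[0, 1, 2], [3, 4, 5]]
```

Unlike CONCOR, this method can only ever separate connected groups, which is why it cannot group disconnected-but-structurally-equivalent nodes the way the blockmodel does.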

Fig. 12

Fall 2015 CONCOR block membership by exam. Traces are colored by students’ exam 1 block assignment

Finally, it is clear that the blocks found by CONCOR are not stable across exams during the fall, as shown in the alluvial diagram (Fig. 12). This leads us to note a few things. First, the block numbers assigned by the algorithm are not significant; they reflect only which block is “easiest” to detach from the network. In general, nodes that are together in one block on one exam are not necessarily blocked together on subsequent exams, although a few cohorts of students stay together throughout the semester (for example, the band that goes from block 7 to block 5 to block 5 to block 1).
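The cross-tabulation underlying such an alluvial diagram is simple to construct: count how students flow between block assignments on consecutive exams. The block labels below are invented placeholders, not the observed assignments:

```python
# Illustrative sketch: counting block-to-block flows between two exams, the
# quantity an alluvial diagram visualizes. Labels are invented placeholders.
from collections import Counter

exam1_block = {"s1": 7, "s2": 7, "s3": 7, "s4": 2, "s5": 1}
exam2_block = {"s1": 5, "s2": 5, "s3": 5, "s4": 3, "s5": 3}

flows = Counter((exam1_block[s], exam2_block[s]) for s in exam1_block)
for (b1, b2), n in sorted(flows.items()):
    print(f"exam 1 block {b1} -> exam 2 block {b2}: {n} students")
```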

Spring 2016

Global and node-level measures

In the Spring 2016 semester, 36 students were enrolled in the course (Physics II) on census day. All of the students took all of the exams. Therefore, the networks for Spring 2016 have N = 36 nodes. As stated previously, some of the students (N = 22) took the previous course in the Fall 2015 semester. The left-hand column of Fig. 13 shows the sociograms for each of the exam networks, with nodes colored by their CONCOR block membership. Table 4 shows summary statistics for the spring semester. By and large, the summary statistics were much more stable across exams than during the fall semester. One possible mechanism to explain this stability is that group exams were an unfamiliar event for all students in the Fall 2015 semester, but not so for the 22 students in the Spring 2016 semester who were in the Fall 2015 course. This added familiarity with group exams could have led to a more swift adoption of group exam collaboration norms. The number of edges was consistent over the first three exams, and then increased slightly on the fourth exam. As a result, the density was also stable for all four exams. The average degree was stable for the first three exams and then increased by approximately 1 for the fourth exam. The reciprocity increased from exam 1 to exam 2 by 9%, but other shifts between exams were smaller. There are no notable differences between the fall networks and the spring networks based on these measures, and the global and local clustering coefficients were similar as well. Finally, the degree assortativity showed the most variation across exams: low for the first exam, spiking on the second, moderate for the third, and increasing again on the fourth.

Fig. 13

Spring 2016 sociograms (left column) and reduced networks (right column) for exams 1–4

Table 4 Summary statistics for the spring semester (N = 36 nodes)

The centrality distributions for each exam network in the Spring 2016 semester exhibited some similar patterns to those found in the fall semester. As an example, we present the different types of degree distributions for Spring 2016 Exam 1 in Fig. 14, as well as their Spearman correlations.

Fig. 14

Degree centrality distributions, scatterplots, and Spearman correlation coefficients for Spring 2016 exam 1

Fig. 15

In-degree centrality distributions, scatterplots, and Spearman correlation coefficients for the Spring 2016 exams

What is more interesting about this analysis is investigating how centrality “lasted” in the Spring 2016 semester. As in the fall semester, and somewhat surprisingly given that slightly more than half of the class was familiar with the group exam paradigm, we observed that the centrality scores in the first exam network did not correlate with centrality scores on future exam networks. In Fig. 15, we show the distributions, scatterplots, and correlations for in-degree centrality for each of the exams in Spring 2016. Here, the trend observed in the fall is amplified: correlations between exam 1 and other exams were small (R = 0.28 between exams 1 and 3 was the largest), while correlations among exams 2–4 were stronger, ranging from R = 0.70 to R = 0.88. In Fig. 16, we show the distributions, scatterplots, and correlations for in-closeness centrality for each of the exams in Spring 2016. For closeness, we observed a similar pattern to the degree statistic: exam 1 did not correlate strongly with other exams, and exams 2–3 correlated more strongly with the subsequent exam (R = 0.68 for exams 2 and 3 and R = 0.71 for exams 3 and 4). We also noticed that the closeness statistic was bi-modal for exams 2–4. Figure 17 shows these plots for directed eigenvector centrality. Again, exam 1 does not correlate with other exams, and exams 2–4 correlate with each other, especially each subsequent exam (R = 0.77 for exams 2 and 3, R = 0.67 for exams 3 and 4, and R = 0.55 for exams 2 and 4). These distributions are still bi-modal, but not as extreme as the closeness distributions. Figure 18 shows directed betweenness centrality for each of the exams in Spring 2016. The betweenness centrality does not follow the pattern established by the other centrality statistics: in general, all of the correlations were weak, with the exception of exams 3 and 4.
During this semester, it is important to note that a small number of students (one of whom was TMS) were highly active in engaging their classmates on the last two exams. Even after many other students (including those in the group they worked with most on other days) had decided their work was complete, turned in their exams, and left, this group of students continued to engage with the rest of the class, asking questions, getting ideas, and sharing their own answers to the problems. It is reasonable to assume that many students named at least one member of this group due to this gregarious behavior.

Fig. 16

In-closeness centrality distributions, scatterplots, and Spearman correlation coefficients for the Spring 2016 exams

Fig. 17

Eigenvector (directed) centrality distributions, scatterplots, and Spearman correlation coefficients for the Spring 2016 exams

Fig. 18

Betweenness (directed) centrality distributions, scatterplots, and Spearman correlation coefficients for the Spring 2016 exams

Fig. 19

Spring 2016 CONCOR block membership by exam. Traces are colored by students’ exam 1 block assignment

The pattern that we have described for the centrality distributions is echoed in our analysis of CONCOR block membership. We noticed that there was a re-numbering of the blocks between exam 1 and exam 2 in the alluvial diagram presented in Fig. 19, but the groupings of nodes into blocks were relatively stable. After the first exam, the block assignments were more stable than in the fall semester.

Fig. 20

Spring 2016 sociograms for exams 1–4 in each row labeled A–D respectively. In the left column, node color is given by the CONCOR algorithm, while on the right, node color is given by the edge-betweenness algorithm

We also notice a more striking difference between the CONCOR blocks and the edge-betweenness communities in the spring networks. Sociograms for each of the exams are shown in the rows of Fig. 20, with the nodes colored by CONCOR block in the left-hand column and by edge-betweenness community in the right-hand column. Many of the communities identified by edge-betweenness have members from two blocks. For example, the orange community in the upper-left of the exam 1 network has members from the lavender and grey blocks, and the lavender nodes are connected to the rest of the network only through the grey node. Other communities display similar properties, with one set of nodes more central to the community and other nodes more peripheral to it. This peripheral participation is due either to the node being more strongly connected to other communities in the network (such as the grey node previously mentioned) or to it being more isolated from the community (such as the node at the top of the green community, to the right of the orange community, in the exam 1 network).

Aggregate CONCOR results

At the level of two CONCOR splits, essentially all the reduced networks look the same—they consist of “island” positions which connect internally with not enough exterior links to exceed the display threshold. This corresponds to a “coherent subgroups” structure, which has been observed in other active learning classrooms (Traxler et al. 2020). The three-split structure shown in Figs. 4 and 13 shows more complexity and numerous bridges between positions. Additional context that CONCOR can add, and which most community detection algorithms cannot, is a blocking for the four-exam sequence that uses each “snapshot” of links to group nodes by their linking behavior through the entire semester.
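One way to realize this multi-time-point input is to stack each exam's adjacency matrix (and its transpose) into a single profile matrix before running CONCOR, so each node's column records its ties across the whole semester. A sketch with random stand-in matrices:

```python
# Sketch of the multi-time-point CONCOR input: concatenate each exam's
# out-ties and in-ties into one structural profile per node. The four
# adjacency matrices here are random stand-ins for the exam networks.
import numpy as np

rng = np.random.default_rng(1)
exams = []
for _ in range(4):
    A = (rng.random((10, 10)) < 0.2).astype(float)
    np.fill_diagonal(A, 0)
    exams.append(A)

# One column per node; 4 exams x (rows + columns) = 8 stacked 10-row blocks.
profile = np.vstack([M for A in exams for M in (A, A.T)])
print(profile.shape)  # → (80, 10)
```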

Fig. 21

Aggregate weighted networks for Fall 2015 (left) and Spring 2016 (right), with nodes sized by in-degree and edge thickness scaled by weight. Node colors show the CONCOR block assignments using all four exam networks as input

Figure 21 shows the weighted full-semester networks colored by this multi-time-point block assignment. A few patterns emerge in this view that are not visible at the single-exam level. In Fall 2015, one node with high in- and out-degree is distinct enough in linking behavior to form its own cluster (green); this person shifted through different blocks during the semester and did not follow the general trend toward “settling down.” Another block (yellow) was a coherent subgroup that largely stayed the same through the semester. Several smaller groups (red, purple, dark blue, gray) are well connected to each other in aggregate, but split and reformed in various configurations over different exams.

In Spring 2016, the general stability of the network is reflected in a more modular structure in Fig. 21B, with fewer links between clusters than in the fall. Two small blocks (green and orange) consist of nodes that tend to be on the border of other clusters during the semester, appearing as bridge points between more consistent groups. For most other blocks, the tendency is toward a high degree of internal communication and a less diverse set of bridging connections.

These time-sequenced CONCOR results, when compared with the blockings from individual exams, can identify students who form the core or nexus of a collaboration group, as distinct from others who are “short-term visitors” for one or two exams. From an instructor’s point of view, these nuances of collaboration are very difficult to capture in real-time, so the network results allow for a more thorough evaluation of how the group exam process played out. When combined with exam scores (the subject of ongoing analysis), this can also give a sense of how effective students’ self-directed groupings were at pooling their knowledge for the exam.

Study limitations

One of the limitations of this study is that, while group work is commonplace in schools at all levels, group assessment—in particular, high-stakes group assessment—is much less common. Changes in the network could therefore indicate growing familiarity with the group assessment paradigm rather than genuine changes in the classroom social network.

A second limitation is that many studies in SNA examine multi-relational data; we did not collect any data in this regard due to restrictions in our IRB protocol. Students are indeed relating to one another in multiple ways that we are not capturing with these exam networks. For example, students who take these physics courses often take calculus courses concurrently, and have opportunities to interact in that setting.

Rights and permissions

Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article’s Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article’s Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/.
