# BTP: automatic identification and prediction of tasks in data center networks – Journal of Cloud Computing

#### By Shaojun Zou, Wei Ji and Jiawei Huang

Sep 1, 2022

In this section, we describe the detailed design of BTP, including task identification and prediction of the task-size category. The main variables used in the paper are shown in Table 1.

In our design, task identification consists of the following three steps:

1. Attribute exploration: We first explore a set of flow-, community-, and application-level attributes that might be useful in task identification. For flow-level attributes, we consider a flow’s starting time, average packet size, variance of packet size, and average flow inter-arrival interval. Besides, the data center can be separated into multiple service groups. Given that intra-group communication is frequent while inter-group communication is rare, any two flows belonging to the same community are likely to be inside one task. Finally, we can take advantage of application designs to help task identification. For example, the port assignment rule in Spark reveals that the data transmitted to the same executor typically have the same destination IP and port.

2. Distance calculation: Given the attributes, BTP calculates distances between flows to capture task relationships; flows in the same task have smaller distances. The key problem is finding a good metric that effectively reflects the importance of each attribute and the task relationships.

In our design, we leverage distance metric learning to obtain the relationship between any two flows. If the distance between two flows is smaller than the given threshold, then they belong to the same task; otherwise, they belong to different tasks. The distance between two flows fi and fj can be calculated as

$$d(f_{i}, f_{j}) = \|f_{i} - f_{j}\|_{A} = \sqrt{(f_{i}-f_{j})^{T}A(f_{i}-f_{j})},$$

(1)

where A is the distance metric matrix, whose entries reflect the weights of the different attributes.

We desire a metric under which any pair of flows in a task has a small distance while any pair of flows belonging to different tasks has a distance larger than a given threshold. Here, we simplify the problem by restricting A to be diagonal and adopt the Newton-Raphson method to solve it.
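With A restricted to be diagonal, Eq. (1) reduces to a weighted Euclidean distance. The sketch below illustrates the distance computation and the threshold test; the attribute vectors, weights, and threshold are illustrative values, not learned from any trace:

```python
import numpy as np

def flow_distance(f_i, f_j, a_diag):
    """Eq. (1) with a diagonal metric A:
    d = sqrt((f_i - f_j)^T A (f_i - f_j))."""
    diff = np.asarray(f_i, dtype=float) - np.asarray(f_j, dtype=float)
    return float(np.sqrt(diff @ (np.asarray(a_diag, dtype=float) * diff)))

def same_task(f_i, f_j, a_diag, threshold):
    """Two flows are placed in the same task when their
    learned distance falls below the given threshold."""
    return flow_distance(f_i, f_j, a_diag) < threshold

# Hypothetical attribute vectors:
# [start time, avg pkt size, pkt size variance, avg inter-arrival]
f1 = [0.0, 1.2, 0.3, 0.5]
f2 = [0.1, 1.1, 0.4, 0.5]
weights = [2.0, 1.0, 1.0, 4.0]  # illustrative diagonal of A
print(same_task(f1, f2, weights, threshold=0.5))  # prints True
```

In practice the diagonal of A would be the output of the metric-learning step described above, not hand-picked constants.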

3. Identifying tasks via clustering: Based on the aforementioned attributes and the learned distance metric, the BTP master leverages the clustering algorithm R-DBSCAN to identify tasks. R-DBSCAN is a variant of DBSCAN with several advantages. First, although the number of tasks changes dynamically during transmission and cannot be determined in advance, DBSCAN automatically forms clusters within the ε neighborhood without requiring the number of tasks to be specified beforehand. Second, a task in the data center network may consist of a single flow, and R-DBSCAN can extract such tasks. Finally, R-DBSCAN uses the core objects in the cluster as an index to reduce the running time and time complexity.

Next, we introduce how the clustering algorithm works. It first scans the data set to obtain the parent object set and the child object set while establishing the relationships between parent and child objects. Here, the parent object set refers to the set of flows that have the same traffic characteristics, while the child object set is defined as the set of flows that are not yet classified. Then the parent object set is treated as the data set, and the master calls DBSCAN to obtain the clusters of parent objects. Finally, based on the parent-child relationships, the master collects all the child objects of each identified parent object and merges them into a cluster, which constitutes a task.
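The parent/child reduction followed by density clustering can be sketched as follows. This is a simplified illustration, not the authors' implementation: the parent-side step below uses ε-connected components (equivalent to DBSCAN with min_samples = 1) and Euclidean distance over toy 2-D attribute vectors:

```python
import numpy as np
from collections import defaultdict

def reduce_flows(flows, tol=1e-6):
    """Scan the data set: flows with (near-)identical attribute
    vectors share one parent object; the rest of each group are
    that parent's children."""
    parents, children = [], defaultdict(list)
    for idx, f in enumerate(flows):
        for p in parents:
            if np.linalg.norm(flows[p] - f) <= tol:
                children[p].append(idx)
                break
        else:
            parents.append(idx)
    return parents, children

def cluster_parents(flows, parents, eps):
    """Cluster the parent set only: here, eps-connected components
    (DBSCAN with min_samples = 1), so singleton tasks survive."""
    labels, cluster = {p: -1 for p in parents}, 0
    for p in parents:
        if labels[p] != -1:
            continue
        labels[p], stack = cluster, [p]
        while stack:
            q = stack.pop()
            for r in parents:
                if labels[r] == -1 and np.linalg.norm(flows[q] - flows[r]) <= eps:
                    labels[r] = cluster
                    stack.append(r)
        cluster += 1
    return labels

def identify_tasks(flows, eps=1.0):
    """Merge each parent's children into its cluster;
    every resulting cluster of flow indices is one task."""
    flows = np.asarray(flows, dtype=float)
    parents, children = reduce_flows(flows)
    labels = cluster_parents(flows, parents, eps)
    tasks = defaultdict(list)
    for p in parents:
        tasks[labels[p]].append(p)
        tasks[labels[p]].extend(children[p])
    return [sorted(members) for members in tasks.values()]
```

For example, `identify_tasks([[0, 0], [0, 0], [0.5, 0], [10, 10]], eps=1.0)` groups the first three flows (two identical flows plus one within ε of their parent) into one task and leaves the distant flow as a singleton task.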

### The prediction mechanism of task-size category

Once a task is obtained through task identification, we can get its width. While the task is in transit, its length and size are collected and added to the data set. Based on this information, we use Bayesian decision to predict the task-size category.

From these traffic characteristics, we observe that the bytes of narrow tasks account for only 0.68% of the total byte amount, even though narrow tasks account for 68% of all tasks. Conversely, although wide tasks account for only 32% of tasks, their bytes account for 99.32% of the total. Therefore, we infer a correlation between task width and size: the smaller a task’s width, the more likely it is a small task; conversely, the larger the width, the more likely it is a large task.

In order to verify our conjecture, we further analyze the above traces and try to find the maximum threshold wmax of task width such that, when a task’s width is smaller than wmax, the task is a small task with high probability. A task is considered small if its size is less than the small-task threshold Ks. Let Fs(x), Fw(x), and Fl(x) denote the cumulative distribution functions of task size, task width, and task length, respectively. Given the threshold Ks, we can calculate the maximum threshold wmax of the task width as

$$\begin{aligned} & \min \; w_{max} \\ & \text{subject to} \;\; \frac{F_{s,w}(K_{s},w_{max})}{F_{w}(w_{max})} \geq \alpha, \end{aligned}$$

(2)

where Fs,w is the joint distribution function of task size and task width. The left-hand side of the constraint is the probability that a task is small given that its width is less than wmax. If this probability is at least α (e.g., 85%), then the minimum wmax satisfying the constraint is the expected threshold value.
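The conditional probability in Eq. (2) can be estimated empirically from trace samples. The sketch below scans the candidate widths and, following the text’s description of wmax as the largest width for which a task is still small with high probability, returns the largest candidate satisfying the constraint; the sizes, widths, and thresholds are illustrative, not from the paper’s traces:

```python
import numpy as np

def find_w_max(sizes, widths, k_s, alpha):
    """Largest width threshold w such that
    P(size < K_s | width <= w) >= alpha, i.e. the empirical
    version of the constraint in Eq. (2)."""
    sizes = np.asarray(sizes, dtype=float)
    widths = np.asarray(widths, dtype=float)
    best = None
    for w in np.sort(np.unique(widths)):
        mask = widths <= w  # tasks whose width is within the candidate
        if np.mean(sizes[mask] < k_s) >= alpha:
            best = float(w)
    return best

# Illustrative trace: (task size, task width) pairs, K_s = 100, alpha = 0.8
sizes = [10, 20, 50, 500, 30, 800]
widths = [5, 10, 12, 40, 14, 60]
print(find_w_max(sizes, widths, k_s=100, alpha=0.8))  # prints 40.0
```

The same scan applies to the wide-task threshold Δ of Eq. (3) by restricting the samples to tasks with width in [K, Δ] before estimating the ratio.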

Furthermore, we analyze the traces from Facebook’s production data center. Figure 2 shows the distribution of task width. We observe that 80% of small tasks have a task width less than 15. Therefore, when the task width is less than 15, it is a narrow task.

Similar to the literature, Ks and wmax are set to 100 MB and 50, respectively. Note that the task width of 50 is also the threshold value distinguishing narrow tasks from wide tasks. Figure 3 (a) and (b) show the distributions of task size and length, respectively.

Since small tasks are preferentially scheduled during transmission, we need to identify the small tasks among the wide tasks. We conjecture that such a wide task can be a small task for the following reason: its width just exceeds the threshold, but its length is extremely small, so the number of task bytes is small. To verify this conjecture, we further analyze the data and present the results.

Figure 4 (a) and (b) show the cumulative distribution functions of task width for small tasks (task size < 100 MB) and short tasks (task length < 5 MB), respectively.

Therefore, for wide tasks, we want to find a threshold Δ on task width such that, when the task width lies within the range [K, Δ], the task is a small task with high probability. We can calculate the task width threshold Δ as

$$\begin{aligned} & \min \; \Delta \\ & \text{subject to} \;\; \frac{P(SW)}{P(W)} \geq \beta, \end{aligned}$$

(3)

where P(W) represents the probability that a task’s width falls within the range [K, Δ], while P(SW) is the probability that a task’s width falls within [K, Δ] and the task is small. When β and the small-task threshold are set to 85% and 100 MB, respectively, the task-width threshold Δ is 60. As shown in Fig. 5, when the width of a wide task is between 50 and 60, the probability that it is a small task is 85%.

Finally, we predict the task-size category via Bayesian decision, which consists of three stages. The first is the preparation stage, including feature-attribute selection and division. The second is the training stage, which trains on the sample data and calculates the frequency of each category as well as the probability of each category under different conditions. The third is the prediction and evaluation stage, which predicts the task-size category and evaluates the accuracy as well as the error rate.

Training data: We use Facebook’s real traces for testing. These traces were collected over one hour from a Hive/MapReduce data warehouse in Facebook’s data center with 3,000 machines and 150 ToR switches. There are 526 tasks and about 7×10⁵ flows in these traces (i.e., the data set). In this paper, the data set is divided into training and test samples at a ratio of 2/5 and 3/5, respectively. Then we calculate the small-task probability P(S=0) and the large-task probability P(S=1) from the sample data. Finally, we calculate the conditional probabilities P(wi|S=0) and P(wi|S=1) of task widths under the different task-size categories, where wi ∈ W.

Prediction and evaluation of task-size category: Given the task width, we can calculate the probability of each task-size category (i.e., P(S=0|wi) and P(S=1|wi)). In our design, if P(S=0|wi) is larger than P(S=1|wi), the task is classified as small; otherwise, it is classified as large. P(S=0|wi) and P(S=1|wi) can be expressed as

$$P(S=0|w_{i}) = \frac{P(w_{i}|S=0)\times P(S=0)}{P(w_{i})},$$

(4)

$$P(S=1|w_{i}) = \frac{P(w_{i}|S=1)\times P(S=1)}{P(w_{i})},$$

(5)
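The training and decision steps can be sketched as follows. This is a minimal illustration, not the paper’s implementation: task widths are assumed to be discretized into buckets, the training pairs are toy values, and, since the denominator P(wi) is shared by Eqs. (4) and (5), the comparison uses only the numerators:

```python
from collections import Counter

def train(samples):
    """Estimate the priors P(S) and conditionals P(w | S) by
    counting over the training set; samples are
    (width_bucket, category) pairs with category 0 = small, 1 = large."""
    prior, cond = Counter(), {0: Counter(), 1: Counter()}
    for w, s in samples:
        prior[s] += 1
        cond[s][w] += 1
    n = sum(prior.values())
    p_s = {s: prior[s] / n for s in (0, 1)}
    p_w_given_s = {s: {w: c / prior[s] for w, c in cond[s].items()}
                   for s in (0, 1)}
    return p_s, p_w_given_s

def predict(w, p_s, p_w_given_s):
    """Eqs. (4)-(5): compare P(S=0|w) with P(S=1|w). The shared
    denominator P(w) cancels, so only the numerators
    P(w|S) * P(S) are compared. Returns 0 (small) or 1 (large)."""
    score_small = p_w_given_s[0].get(w, 0.0) * p_s[0]
    score_large = p_w_given_s[1].get(w, 0.0) * p_s[1]
    return 0 if score_small >= score_large else 1

# Toy training set: narrow buckets dominated by small tasks (S=0),
# wide buckets by large tasks (S=1).
data = [(0, 0), (0, 0), (0, 0), (1, 0), (1, 1), (2, 1), (2, 1), (2, 1)]
p_s, p_w_given_s = train(data)
print(predict(0, p_s, p_w_given_s))  # prints 0 (small task)
print(predict(2, p_s, p_w_given_s))  # prints 1 (large task)
```

On the real traces, `train` would run on the 2/5 training split and `predict` on the 3/5 test split described above.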
