Sine–cosine algorithm (SCA)

The sine–cosine algorithm (SCA) is a stochastic, population-based optimization technique inspired by the mathematical sine and cosine functions, recently developed in [32]. Because of these cyclic functions, the algorithm creates a cyclic search space for exploitation, in which search agents update their positions according to the following position-updating equations:

$$Y_{j}^{n + 1} = Y_{j}^{n} + a_{1} \sin \left( a_{2} \right) \times \left| a_{3} P_{j}^{n} - Y_{j}^{n} \right|$$

(2)

$$Y_{j}^{n + 1} = Y_{j}^{n} + a_{1} \cos \left( a_{2} \right) \times \left| a_{3} P_{j}^{n} - Y_{j}^{n} \right|$$

(3)

where \(a_{1}\), \(a_{2}\), \(a_{3}\) are random numbers and the main parameters of this algorithm, \(Y_{j}^{n + 1}\) and \(Y_{j}^{n}\) are the next and current positions of the solution in the jth dimension at the nth iteration, respectively, and \(P_{j}^{n}\) is the destination point in the jth dimension. Equations (2) and (3) can be combined through a fourth parameter \(a_{4}\). Depending on the value of \(a_{4}\), a random number in the range [0, 1], the algorithm chooses which equation to use for updating the position of the search agent:

$$Y_{j}^{n + 1} = \begin{cases} Y_{j}^{n} + a_{1} \sin \left( a_{2} \right) \times \left| a_{3} P_{j}^{n} - Y_{j}^{n} \right| & \text{if } a_{4} < 0.5 \\ Y_{j}^{n} + a_{1} \cos \left( a_{2} \right) \times \left| a_{3} P_{j}^{n} - Y_{j}^{n} \right| & \text{if } a_{4} \ge 0.5 \end{cases}$$

(4)

The parameter \(a_{1}\) decides the region of the next position, which may lie between the current position and the destination or beyond it. Its purpose is to balance the exploitation and exploration of this optimizer, and its value is given by:

$$a_{1} = b - n\frac{b}{N}$$

(5)

where N is the maximum number of iterations, b is a constant, and n represents the current iteration. The parameter \(a_{2}\) decides the direction of movement of a search agent, either towards the global optimum or away from it. Better results are obtained when the combined sine and cosine terms remain within the range [−2, 2], with \(a_{2}\) drawn from [0, 2π]. The purpose of \(a_{3}\) is to weight the emphasis on the destination, and it is chosen at random: a value greater than 1 stochastically emphasizes the destination, while a value less than 1 de-emphasizes it.
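The update rule in (4), with the schedule in (5), can be sketched in Python as follows. The vectorized form, the per-agent and per-dimension redrawing of \(a_{2}\), \(a_{3}\), \(a_{4}\), and the default value b = 2 are implementation assumptions, not details prescribed by the text.

```python
import numpy as np

def sca_update(Y, P, n, N, b=2.0, rng=None):
    """One SCA position update following Eqs. (2)-(5).

    Y : (agents, dims) array of current positions
    P : (dims,) destination (best-so-far) position
    n : current iteration, N : maximum number of iterations
    """
    rng = rng or np.random.default_rng()
    a1 = b - n * b / N                               # Eq. (5): linearly decreasing
    # Assumption: the random parameters are redrawn per agent and dimension.
    a2 = rng.uniform(0.0, 2.0 * np.pi, size=Y.shape)
    a3 = rng.uniform(0.0, 2.0, size=Y.shape)
    a4 = rng.uniform(0.0, 1.0, size=Y.shape)
    step = np.abs(a3 * P - Y)
    sine_move = Y + a1 * np.sin(a2) * step           # Eq. (2)
    cosine_move = Y + a1 * np.cos(a2) * step         # Eq. (3)
    return np.where(a4 < 0.5, sine_move, cosine_move)  # Eq. (4)
```

Note that at the final iteration (n = N) the factor \(a_{1}\) reaches zero, so the agents stop moving, which is what gradually shifts the algorithm from exploration to exploitation.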

Improved sine–cosine algorithm (ISCA)

Although SCA is capable of handling real-time problems, there is scope to improve its convergence rate, its ability to escape nearby optima, and the balance between exploration and exploitation. These limitations of the traditional SCA stem from the updating scheme of its search agents: most agents move towards the global optimum, but they can become trapped in a local optimum and converge prematurely. To overcome this, a new scheme for updating the position of the search agents is introduced in this paper. The scheme consists of an SCA/best-target component, shown in (6) and (7), and an SCA/rand-target component, shown in (8) and (9). The best-target component moves the search agents towards the best position obtained so far and searches locally around the best agent, which intensifies the solution. The rand-target component moves the search agents towards a random position, which explores more of the search space. The exploring capabilities of both schemes are then combined by taking their mean, as shown in (10), and the result is set as the new search agent. The characteristics of the proposed ISCA are as follows:

  1. It maintains a balance between exploration and exploitation.

  2. It has fewer parameters: the proposed ISCA has 3 parameters, while the original SCA has 4.

  3. It has a better convergence rate than the SCA.

  4. It avoids getting trapped in local optima.

The values of the three parameters are decided in accordance with (11), (12), and (13), respectively.

$$Y_{1} = Y_{best}^{n} + a_{1} \sin \left( a_{2} \right) \times \left| a_{3} Y_{rand}^{n} - Y_{j}^{n} \right|$$

(6)

$$Y_{2} = Y_{best}^{n} + a_{1} \cos \left( a_{2} \right) \times \left| a_{3} Y_{rand}^{n} - Y_{j}^{n} \right|$$

(7)

$$Y_{3} = Y_{rand}^{n} + a_{1} \sin \left( a_{2} \right) \times \left| a_{3} Y_{best}^{n} - Y_{j}^{n} \right|$$

(8)

$$Y_{4} = Y_{rand}^{n} + a_{1} \cos \left( a_{2} \right) \times \left| a_{3} Y_{best}^{n} - Y_{j}^{n} \right|$$

(9)

$$Y_{j}^{n + 1} = \mathrm{Mean}\left( Y_{1}, Y_{2}, Y_{3}, Y_{4} \right)$$

(10)

$$a_{1} = b\left( 1 - \frac{n}{N} \right)$$

(11)

$$a_{2} = 2 \times \pi \times rand\left( 0, 1 \right)$$

(12)

$$a_{3} = 2 \times rand\left( 0, 1 \right)$$

(13)

where b is a constant set to 2, N is the maximum number of iterations, n is the current iteration, and \(rand\left( 0, 1 \right)\) denotes a random number generated in the range [0, 1].
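Under the parameter settings above, one ISCA update following Eqs. (6)–(13) can be sketched as below. The function name is illustrative, and drawing \(a_{2}\) and \(a_{3}\) once per agent and sharing them across the four candidates is an assumption; the text does not specify whether they are redrawn per candidate.

```python
import numpy as np

def isca_update(Y, Y_best, Y_rand, n, N, b=2.0, rng=None):
    """One ISCA position update following Eqs. (6)-(13).

    Y      : (agents, dims) current positions
    Y_best : (dims,) best position obtained so far
    Y_rand : (dims,) randomly selected position
    """
    rng = rng or np.random.default_rng()
    a1 = b * (1.0 - n / N)                  # Eq. (11)
    a2 = 2.0 * np.pi * rng.random(Y.shape)  # Eq. (12)
    a3 = 2.0 * rng.random(Y.shape)          # Eq. (13)
    # SCA/best-target: search locally around the best agent, Eqs. (6)-(7)
    y1 = Y_best + a1 * np.sin(a2) * np.abs(a3 * Y_rand - Y)
    y2 = Y_best + a1 * np.cos(a2) * np.abs(a3 * Y_rand - Y)
    # SCA/rand-target: explore towards a random position, Eqs. (8)-(9)
    y3 = Y_rand + a1 * np.sin(a2) * np.abs(a3 * Y_best - Y)
    y4 = Y_rand + a1 * np.cos(a2) * np.abs(a3 * Y_best - Y)
    return (y1 + y2 + y3 + y4) / 4.0        # Eq. (10): mean of the four candidates
```

As \(a_{1}\) decays to zero over the iterations, the update collapses towards the midpoint of the best and random positions, trading exploration for exploitation.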

The flow chart of the ISCA is shown in Fig. 3. The algorithm has three main steps: initialization, iteration, and termination. In the first step, the algorithm initializes its parameters: the maximum number of iterations (N), the number of search agents (c), and the number of variables to be tuned (d) together with their upper (ub) and lower (lb) bounds, along with the first set of search agents (solutions). In the second step, it generates a single new search agent by averaging the four candidate agents produced by the proposed search schemes. In the last step, the best agent obtained so far is selected as the solution to the optimization problem.
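The three steps above can be sketched as a minimal optimizer loop. The greedy replacement of an agent only when its candidate improves, the clipping to the bounds, and the sphere test function used in the usage line are assumptions not specified in the text.

```python
import numpy as np

def isca(obj, d, lb, ub, c=30, N=200, b=2.0, seed=0):
    """Minimal ISCA loop: initialization, iteration, termination.

    obj : objective function to minimize
    d   : number of variables; lb, ub : lower/upper bounds
    c   : number of search agents; N : maximum iterations
    """
    rng = np.random.default_rng(seed)
    # Step 1: initialization of the first set of search agents.
    Y = lb + (ub - lb) * rng.random((c, d))
    fit = np.apply_along_axis(obj, 1, Y)
    best, best_f = Y[fit.argmin()].copy(), fit.min()
    # Step 2: iteration.
    for n in range(N):
        a1 = b * (1.0 - n / N)                       # Eq. (11)
        for i in range(c):
            a2 = 2.0 * np.pi * rng.random(d)         # Eq. (12)
            a3 = 2.0 * rng.random(d)                 # Eq. (13)
            Yr = Y[rng.integers(c)]                  # random agent's position
            y1 = best + a1 * np.sin(a2) * np.abs(a3 * Yr - Y[i])
            y2 = best + a1 * np.cos(a2) * np.abs(a3 * Yr - Y[i])
            y3 = Yr + a1 * np.sin(a2) * np.abs(a3 * best - Y[i])
            y4 = Yr + a1 * np.cos(a2) * np.abs(a3 * best - Y[i])
            cand = np.clip((y1 + y2 + y3 + y4) / 4.0, lb, ub)  # Eq. (10)
            f = obj(cand)
            if f < fit[i]:                           # assumed greedy replacement
                Y[i], fit[i] = cand, f
            if f < best_f:
                best, best_f = cand.copy(), f
    # Step 3: termination - return the best agent obtained so far.
    return best, best_f

# usage: minimize the sphere function f(x) = sum(x**2) over [-10, 10]^5
x, fx = isca(lambda v: float(np.sum(v**2)), d=5, lb=-10.0, ub=10.0)
```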

Fig. 3 Flow chart of the proposed ISCA

Performance estimation of the suggested method

The performance of the proposed technique is compared against ALO, SCA, SSA, and PSO using 13 standard unimodal and multimodal benchmark functions. Each algorithm is run 20 times on every benchmark function. The averages and standard deviations over the benchmark functions for the ISCA, SCA, ALO, SSA, and PSO algorithms are shown in Table 1. A Wilcoxon signed-rank test is carried out on the results, shown in Table 2, to confirm the superiority of the ISCA. From Tables 1 and 2, it is found that the ISCA outperforms the other methods on eight functions (\(f_{1}, f_{2}, f_{3}, f_{4}, f_{5}, f_{7}, f_{10}, f_{11}\)), while PSO performs best on \(f_{6}\), \(f_{8}\), and \(f_{12}\), and SCA and ALO perform best on \(f_{9}\) and \(f_{13}\), respectively. Hence, the proposed method achieves better overall performance than the existing methods.
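A Wilcoxon signed-rank test over paired per-run results, of the kind used to produce Table 2, might be carried out as below using SciPy. The run values here are synthetic placeholders, not the actual results behind Tables 1 and 2, and the +/≈ labelling rule is an assumption about how the table symbols were assigned.

```python
import numpy as np
from scipy.stats import wilcoxon

# Hypothetical per-run best fitness values for two algorithms on one
# benchmark function, 20 independent runs each (placeholder data).
rng = np.random.default_rng(1)
isca_runs = rng.normal(0.10, 0.02, size=20)
sca_runs = rng.normal(0.25, 0.05, size=20)

# Paired two-sided test on the per-run differences.
stat, p = wilcoxon(isca_runs, sca_runs, alternative="two-sided")

# Assumed labelling: "+" if significantly better, "≈" otherwise.
verdict = "+" if (p < 0.05 and isca_runs.mean() < sca_runs.mean()) else "≈"
```

The test is paired because both algorithms are evaluated on the same benchmark under the same run budget, which is why a signed-rank test is preferred over an unpaired rank-sum test here.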

Table 1 Performance evaluation of the ISCA against the SCA, SSA, ALO, and PSO on unimodal and multimodal benchmark test functions
Table 2 Wilcoxon signed-rank test results on unimodal and multimodal functions, indicating whether each method is inferior (−), superior (+), or equivalent (≈) to the proposed method

