Attribute reduction is a method for eliminating redundant information while maintaining classification ability. Currently, rough set theory is a commonly used approach for attribute reduction. It uses concepts such as indiscernibility relations and positive regions to traverse and evaluate the attributes of a decision system and eliminate the unnecessary ones [22,23,24]. However, the traditional reduction algorithm is inefficient, and its results are limited by prior information. As a heuristic intelligent algorithm, the genetic algorithm [25,26,27] can improve the efficiency of reduction and is not restricted by prior information. In our improved genetic algorithm, the dependence of the decision attributes on the condition attributes is used as the fitness function, and different combinations of condition attributes form the genetic population. The simplest and most important condition attributes for the decision attributes are then obtained through selection, crossover, and mutation operators. The structure of the genetic algorithm is schematically illustrated in Figure 2.

Figure 2

Genetic algorithm structure

Theory of Improved Genetic Algorithm

In the genetic algorithm, the crossover probability \(p_c\) and mutation probability \(p_m\) have a significant impact on the results. In the standard genetic algorithm, the values of \(p_c\) and \(p_m\) are static. If \(p_c\) is too large, the algorithm generates new individuals quickly, but individuals with high fitness are easily destroyed; if \(p_c\) is too small, evolution is slow. Similarly, if \(p_m\) is too large, the algorithm converges slowly, whereas if \(p_m\) is too small, new individuals are generated slowly and population diversity is reduced [28]. To address these issues, Cao et al. [29] proposed an adaptive genetic algorithm that computes \(p_m\) and \(p_c\) according to the fitness value of each individual as follows:

$$p_c = \left\{ \begin{array}{ll} \dfrac{C_1\left(F_{\max} - F'\right)}{F_{\max} - F_{\mathrm{avg}}}, & F' \ge F_{\mathrm{avg}}, \\ C_2, & F' < F_{\mathrm{avg}}, \end{array} \right.$$

(1)

$$p_m = \left\{ \begin{array}{ll} \dfrac{C_3\left(F_{\max} - F''\right)}{F_{\max} - F_{\mathrm{avg}}}, & F'' \ge F_{\mathrm{avg}}, \\ C_4, & F'' < F_{\mathrm{avg}}, \end{array} \right.$$

(2)

where \(F_{\max}\) is the maximum fitness value of the population, \(F_{\mathrm{avg}}\) is the average fitness value of the population, \(F'\) is the larger fitness value of the two crossed individuals, and \(F''\) is the fitness value of the mutated individual. \(C_1\), \(C_2\), \(C_3\), and \(C_4\) are parameters greater than zero and less than one.
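The piecewise rules of Eqs. (1) and (2) can be sketched as follows. The paper's experiments use MATLAB; this is an illustrative Python sketch, and the default \(C_1\)–\(C_4\) values here are placeholders rather than the paper's settings:

```python
# Illustrative sketch of the adaptive probabilities in Eqs. (1)-(2).
# The C1..C4 defaults are placeholder values in (0, 1).

def adaptive_pc(f_prime, f_max, f_avg, C1=0.9, C2=0.6):
    """Crossover probability (Eq. (1)); f_prime is the larger fitness
    of the two individuals being crossed."""
    if f_prime >= f_avg:
        return C1 * (f_max - f_prime) / (f_max - f_avg)
    return C2

def adaptive_pm(f_dprime, f_max, f_avg, C3=0.1, C4=0.05):
    """Mutation probability (Eq. (2)); f_dprime is the fitness of the
    individual to be mutated."""
    if f_dprime >= f_avg:
        return C3 * (f_max - f_dprime) / (f_max - f_avg)
    return C4
```

Note that as \(F'\) or \(F''\) approaches \(F_{\max}\), both probabilities approach zero, which is the local-optimum weakness discussed next.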

The adaptive genetic algorithm adjusts the mutation and crossover probabilities according to individual fitness values, which improves the iteration speed of the algorithm. However, it does not consider the diversity and dispersion of the entire population [30,31,32]. Additionally, in the adaptive genetic algorithm, if the fitness value is close to or reaches the maximum value, then the mutation probability \(p_m\) will be close to zero. In this case, the ability to generate new individuals in the early stages of the algorithm is reduced and the algorithm easily falls into local optima [33]. Given these two problems, we propose the following new equations for the crossover and mutation probabilities of individuals whose fitness values are greater than \(F_{\mathrm{avg}}\) in Eq. (1):

$$p_c = k_1 \times \exp\left( \frac{\sum\nolimits_{i=1}^{N} \left| F_i - F_{\mathrm{avg}} \right|}{-N\Delta} \right) + k_2 \times \exp\left( \frac{F' - F_{\mathrm{avg}}}{F_{\max} - F_{\mathrm{avg}}} \right),$$

(3)

$$p_m = k_3 \times \exp\left( \frac{\sum\nolimits_{i=1}^{N} \left| F_i - F_{\mathrm{avg}} \right|}{-N\Delta} \right) + k_4 \times \exp\left( \frac{F'' - F_{\mathrm{avg}}}{F_{\max} - F_{\mathrm{avg}}} \right),$$

(4)

where \(\Delta = F_{\max} - F_{\min}\), and \(k_1 = \omega_1 \times c_1\) and \(k_2 = \omega_2 \times c_2\) are the adaptive control parameters. \(\omega_1\) and \(\omega_2\) are the weights of the influence of the population dispersion and the individual fitness, respectively, on the crossover probability, with \(\omega_1 + \omega_2 = 1\). When \(\omega_1\) is zero, only the influence of the individual fitness value on the crossover probability is considered; when \(\omega_1\) is one, only the population dispersion is considered. The parameters for population dispersion and individual fitness are the same as those in Eq. (1).

In Eqs. (3) and (4), the term \(\frac{\sum_{i=1}^{N}\left|F_i - F_{\mathrm{avg}}\right|}{-N\Delta}\) represents the dispersion degree of the population. From these equations, one can see that when the population tends to be dispersed, the crossover probability increases and the mutation probability decreases, improving the population's ability to develop excellent individuals. Conversely, when the population becomes concentrated, the crossover probability decreases and the mutation probability increases, improving the population's ability to produce new individuals.
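As a concrete illustration, Eqs. (3) and (4) can be sketched in Python as below (the paper's implementation is in MATLAB). Following Eq. (4), the numerator of the second exponential in Eq. (3) is taken as \(F' - F_{\mathrm{avg}}\); the default \(k\) values reuse the improved-algorithm control parameters quoted later in the text:

```python
import math

def improved_probabilities(fitnesses, f_parent, f_mutant,
                           k1=0.2, k2=0.8, k3=0.02, k4=0.08):
    """Sketch of Eqs. (3)-(4). fitnesses: fitness values of the whole
    population; f_parent: larger fitness of the two crossover parents (F');
    f_mutant: fitness of the mutation candidate (F'')."""
    N = len(fitnesses)
    f_max, f_min = max(fitnesses), min(fitnesses)
    f_avg = sum(fitnesses) / N
    delta = f_max - f_min
    # Population dispersion term: a widely spread population pushes the
    # exponent toward -1, a concentrated one toward 0.
    dispersion = math.exp(-sum(abs(f - f_avg) for f in fitnesses) / (N * delta))
    pc = k1 * dispersion + k2 * math.exp((f_parent - f_avg) / (f_max - f_avg))
    pm = k3 * dispersion + k4 * math.exp((f_mutant - f_avg) / (f_max - f_avg))
    return pc, pm
```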

Selection, Crossover, and Mutation in the Improved Genetic Algorithm

Selection, crossover, and mutation are the core operations of the genetic algorithm. The roulette-wheel method is adopted for the selection of individuals in the population. First, the selection probability of each individual is set according to its fitness function value such that an individual with a larger fitness function value is more likely to be selected. The candidate individuals for crossover and mutation are then selected from the initial population by roulette-wheel selection.
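Roulette-wheel selection as described above can be sketched as follows (an illustrative Python sketch, not the paper's MATLAB code):

```python
import random

def roulette_select(population, fitnesses, rng=random):
    """Fitness-proportionate (roulette-wheel) selection: an individual's
    chance of being picked equals its share of the total fitness."""
    total = sum(fitnesses)
    r = rng.uniform(0, total)
    cumulative = 0.0
    for individual, f in zip(population, fitnesses):
        cumulative += f
        if r <= cumulative:
            return individual
    return population[-1]  # guard against floating-point round-off
```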

The fitness function used to evaluate the quality of an individual is key to selection from the population. In attribute reduction using a genetic algorithm, the goal is for the decision attribute to have the greatest dependence on the reduced condition attribute set while that set contains as few attributes as possible. The fitness function is defined as follows:

$$F = \frac{L - L_r}{L} + \gamma_C(D),$$

(5)

$$\gamma_C(D) = \frac{\left| \mathrm{POS}_C(D) \right|}{\left| U \right|},$$

(6)

where \(L_r\) is the number of ones in an individual's chromosome, i.e., the number of retained condition attributes. If additional attributes are included in the reduced attribute set, then \(L_r\) increases and the fitness \(F\) decreases. \(\gamma_C(D)\) is the dependence of decision attribute \(D\) on condition attribute set \(C\), which measures the importance of the condition attribute set [34]. In Eq. (6), \(\mathrm{POS}_C(D)\) is the positive region of \(D\), and \(U\) is the entire universe of objects.
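A minimal sketch of Eqs. (5) and (6) for a binary-encoded chromosome, assuming \(L_r\) counts the retained attributes (the ones in the chromosome); illustrative Python, not the paper's MATLAB implementation:

```python
def dependency_degree(rows, decisions):
    """Eq. (6): gamma_C(D) = |POS_C(D)| / |U|, the fraction of objects
    whose condition-attribute equivalence class maps to one decision."""
    classes = {}
    for r, d in zip(rows, decisions):
        classes.setdefault(tuple(r), set()).add(d)
    pos = sum(1 for r in rows if len(classes[tuple(r)]) == 1)
    return pos / len(rows)

def fitness(chromosome, table, decisions):
    """Eq. (5): reward small attribute subsets (first term) plus the
    dependence of D on the selected condition attributes (second term)."""
    L = len(chromosome)
    Lr = sum(chromosome)  # number of retained attributes
    selected = [[row[i] for i in range(L) if chromosome[i]] for row in table]
    return (L - Lr) / L + dependency_degree(selected, decisions)
```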

Selected individuals from the population are randomly paired. During crossover, each node (gene position) in one chromosome has a certain probability of being replaced by the node at the same position in its paired chromosome. Mutation is the inversion of the binary code at individual positions in a chromosome: for each candidate mutant individual, every point in the chromosome has a certain probability of mutating. The crossover and mutation probabilities are obtained using Eqs. (3) and (4), respectively.
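The crossover and mutation operators described above can be sketched as follows (illustrative Python: gene-wise swapping between paired chromosomes and bit-flip inversion, with the probabilities supplied by Eqs. (3) and (4)):

```python
import random

def crossover(parent_a, parent_b, pc, rng=random):
    """Each gene position is swapped between the paired chromosomes
    with probability pc."""
    child_a, child_b = list(parent_a), list(parent_b)
    for i in range(len(child_a)):
        if rng.random() < pc:
            child_a[i], child_b[i] = child_b[i], child_a[i]
    return child_a, child_b

def mutate(chromosome, pm, rng=random):
    """Each binary gene is inverted with probability pm."""
    return [1 - g if rng.random() < pm else g for g in chromosome]
```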

A new population is generated through selection, crossover, and mutation. According to the optimal conservation strategy [35], the optimal individual in the parent population is copied directly into the offspring population, replacing the offspring individual with the lowest fitness. The population size is thus kept constant.
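The optimal conservation (elitism) step can be sketched as (illustrative Python):

```python
def apply_elitism(parents, parent_fits, offspring, offspring_fits):
    """Copy the best parent into the offspring population, replacing
    the worst offspring, so the population size stays constant."""
    best = max(range(len(parents)), key=lambda i: parent_fits[i])
    worst = min(range(len(offspring)), key=lambda i: offspring_fits[i])
    offspring = list(offspring)
    offspring_fits = list(offspring_fits)
    offspring[worst] = parents[best]
    offspring_fits[worst] = parent_fits[best]
    return offspring, offspring_fits
```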

Attribute Reduction Simulation Experiment

To test the effectiveness and feasibility of attribute reduction based on the improved genetic algorithm, it was used to analyze the basic features of signals and identify their categories. Harmonic, superposition, and noise signals were the objects to be recognized. The periodicity, maximum value, frequency component, and mean value were considered as recognition features. The simulated harmonic signal {x1, x2, x3}, harmonic superimposed signal {x4, x5, x6}, and noise signal {x7, x8, x9} were defined as follows:

$$\left\{ \begin{aligned} x_1 &= \sin(0.5t + 12), \\ x_2 &= 2\sin(0.8t), \\ x_3 &= 0.5\sin(t + 100), \\ x_4 &= \sin(3t) + 2\sin(t + 26), \\ x_5 &= \sin(t) + 1.5\sin(0.5t + 17) + 0.5\sin(2t + 75), \\ x_6 &= \sin(t) + 2.5\sin(t) + \sin(t + 16), \\ x_7 &= \mathrm{rand}(1, N), \\ x_8 &= 3\,\mathrm{rand}(1, N) + 0.5\sin(2t), \end{aligned} \right.$$

(7)

where \(x_j\) is the amplitude of the simulated signal and \(t\) is the time domain of the simulated signal.

From Figure 3, the attribute features are defined as follows. If the simulated signal is periodic, then C1 = 1; otherwise, C1 = 0. If its maximum value is greater than 2.5, then C2 = 1; otherwise, C2 = 0. If only one frequency component is included, then C3 = 1; otherwise, C3 = 0. If the overall mean value of the signal is zero, then C4 = 1; otherwise, C4 = 0. The initial decision table is provided in Table 1.

Figure 3
Table 1 Initial decision table for the simulation experiment

As shown in Table 1, C1, C2, C3 and C4 are the features of the signals. The numbers 1, 2 and 3 in column D represent the harmonic, superposition, and noise signals, respectively. After establishing the decision information table, the adaptive genetic algorithm and improved genetic algorithm were used to reduce the attributes of the decision table.

The first step in this process is to randomly generate several binary individuals with chromosome length L. Each of the L genes in an individual's chromosome represents one of the L attributes in the decision table. If a gene is encoded as one, then the corresponding attribute is retained; otherwise, it is removed. For example, for a decision system with condition attributes C = {C1, C2, C3, C4}, the chromosome {1100} represents the attribute set composed of C1 and C2.
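This encoding amounts to a simple mask over the attribute list, e.g. (illustrative Python):

```python
def decode(chromosome, attributes):
    """Map a binary chromosome to the condition-attribute subset it
    encodes: a gene of 1 retains the attribute, a 0 removes it."""
    return [a for a, gene in zip(attributes, chromosome) if gene == 1]

# The text's example: chromosome {1100} selects {C1, C2}.
subset = decode([1, 1, 0, 0], ["C1", "C2", "C3", "C4"])
```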

MATLAB was used as the simulation software to program the adaptive genetic algorithm and the improved genetic algorithm. The initial population size for both algorithms was set to five. The crossover and mutation probability control parameters of the adaptive genetic algorithm were set as \(k_1 = 1\), \(k_2 = 1\), \(k_3 = 0.1\), and \(k_4 = 1\), and those of the improved genetic algorithm were set as \(k_1 = 0.2\), \(k_2 = 0.8\), \(k_3 = 0.02\), and \(k_4 = 0.08\).

The iteration termination conditions were set as follows: (1) the number of iterations reaches 50; (2) the fitness improvement in an iteration is lower than a set threshold. If the genetic algorithm satisfies either of these two conditions, then the optimal individual in the current population represents the optimal solution. The adaptive genetic algorithm and improved genetic algorithm were used for attribute reduction, and the iterative process is illustrated in Figure 4.

Figure 4

Attribute reduction based on the adaptive genetic algorithm and improved genetic algorithm

In Figure 4, the red "*" curve shows the best fitness value at each iteration of the adaptive genetic algorithm, and the red "o" curve shows its population-average fitness. Similarly, the blue "*" and "o" curves show the best and average fitness values of the improved genetic algorithm. As shown in Figure 4, in terms of optimal value optimization, the results of the two algorithms are approximately the same, and each obtains the optimal solution in the third iteration. However, the population-average fitness of the improved genetic algorithm converges at the fifth iteration, whereas that of the adaptive genetic algorithm converges at the eighth iteration. The optimization result of both algorithms is {1010}, and the decision rules in Table 2 can be obtained by deleting repeated elements.

Table 2 Decision rule table for the simulation experiment

Figure 4 and Table 2 indicate that there is no significant difference between the two algorithms when the amount of simulation data is small. However, the population evolution speed of the improved genetic algorithm is significantly higher than that of the adaptive genetic algorithm. Attribute reduction based on either the adaptive or the improved genetic algorithm can effectively identify the signals, which proves the effectiveness and feasibility of the proposed method.
