Mobile robots performing loading or docking tasks need to move autonomously from an initial pose to a target position and dock with a mechanism in an unknown, complex environment. This requires the robots to have autonomous positioning and navigation capabilities. The choice of navigation mode and the choice of sensors constrain each other, and many factors must be considered, including the actual application requirements, cost, positioning accuracy, and real-time performance. With the development of electronic and sensor technology, more and more guidance methods are available for mobile robots, including infrared, laser, inertial, ultrasonic, vision, and magnetic guidance. To realize autonomous positioning over a large range, different sensors should be selected for different working conditions.
According to the distance R between the mobile robot and the target position, three situations can be distinguished: long distance (R > 5 m), medium distance (0.2 m < R < 5 m), and short distance (R < 0.2 m). Each range has different application and accuracy requirements. Because the mobile robots work outdoors and are exposed to direct sunlight, camera, laser, and infrared guidance are not applicable. Therefore, this paper proposes the hybrid navigation system shown in Figure 1 to achieve autonomous positioning.
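As a rough illustration of this range-based switching, the Python sketch below (with hypothetical names; the thresholds are the ones listed above) selects a guidance mode from the measured distance to the target.

```python
from enum import Enum, auto

class NavMode(Enum):
    """Guidance modes of the hybrid positioning system."""
    LONG_RANGE = auto()    # Beidou satellite + gyroscope
    MEDIUM_RANGE = auto()  # dual-emission / dual-reception ultrasonic
    SHORT_RANGE = auto()   # magnetic stripe + photoelectric switch

def select_nav_mode(distance_to_target_m: float) -> NavMode:
    """Pick the guidance mode from the distance R to the target position."""
    if distance_to_target_m > 5.0:      # R > 5 m: long distance
        return NavMode.LONG_RANGE
    elif distance_to_target_m > 0.2:    # 0.2 m < R < 5 m: medium distance
        return NavMode.MEDIUM_RANGE
    else:                               # R < 0.2 m: short distance
        return NavMode.SHORT_RANGE
```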
The docking mechanism consists of a lifting platform and a locking mechanism. The lifting platform has four independently retractable outriggers to adapt to uneven outdoor ground, as shown in Figure 2.
The locking mechanism is mounted on top of the mobile robot and has four locking blocks that match the docking area of the lifting platform. The four locking blocks are driven synchronously to align the rotation centers of the mobile robot and the lifting platform. The docking state is shown in Figure 3.
Autonomous Positioning
The Beidou navigation satellite system and a gyroscope are used to locate the mobile robot at long distance. The Beidou system adopts the China Geodetic Coordinate System and provides all-weather, global coverage. Based on the position coordinates of known points in the same coordinate frame, the coordinates of unknown points are calculated by spatial distance intersection, so moving objects can be located and navigated in real time and the relevant information transmitted quickly. The gyroscope determines the orientation of the mobile robot and is installed on the center line of the body. Because the motion range is large and the driving distance is long, there is no strict requirement on motion accuracy or trajectory beyond signal stability, which is sufficient to guarantee that the mobile robot passes through each task point safely.
In practice, the positioning accuracy of the Beidou system is about 5 m. Therefore, an ultrasonic positioning method with dual emission and dual reception is introduced at medium distance, as shown in Figure 4(a). The ultrasonic transmitters are fixed at the target position (points A and B). Because the beam angle of the ultrasonic signal is limited, each point carries two identical transmitters to increase the coverage area, represented by the dotted and solid lines in the figure. The four receivers are installed at the front of the mobile robot (points C and D) to receive the ultrasonic signals. The ultrasonic frequency is 40 kHz and the transmission rate is 20 Hz. When the distance between the robot and the target is within 5 m, stable ultrasonic signal transmission is possible, and the Beidou system and the gyroscope are no longer used.
To unify the time reference of the transmitters and receivers, a 2.4 GHz radio signal source is introduced, with its transmitter fixed at the middle of the target position as shown in Figure 4(b). Since this radio signal travels at the speed of light, much faster than the ultrasonic signal, its propagation time can be ignored; the transmitting time and receiving time can therefore be regarded as the same instant and used as the time reference.
By measuring the transmission delay of the ultrasonic signal, namely the signal arrival times ti (i = 1, 2, 3, 4), the distances si = cti (i = 1, 2, 3, 4) between the transmitter and receiver points are obtained. The coordinates of the receiving points can then be solved according to Eq. (1):
$$\begin{gathered}
\left\{ \begin{array}{l}
x_{1} = \dfrac{\left( x_{b}^{2} - x_{a}^{2} \right) - \left( ct_{3} \right)^{2} + \left( ct_{1} \right)^{2}}{2\left( x_{a} - x_{b} \right)}, \\
y_{1} = y_{a} + \sqrt{\left( ct_{1} \right)^{2} - \left( x_{1} - x_{b} \right)^{2}}, \\
\end{array} \right. \\
\left\{ \begin{array}{l}
x_{2} = \dfrac{\left( x_{b}^{2} - x_{a}^{2} \right) - \left( ct_{4} \right)^{2} + \left( ct_{2} \right)^{2}}{2\left( x_{a} - x_{b} \right)}, \\
y_{2} = y_{a} + \sqrt{\left( ct_{2} \right)^{2} - \left( x_{2} - x_{b} \right)^{2}}, \\
\end{array} \right.
\end{gathered}$$
(1)
and the orientation of the mobile robot is obtained from Eq. (2):
$$\theta = \frac{\pi}{2} + \arctan\frac{y_{2} - y_{1}}{x_{2} - x_{1}}.$$
(2)
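For illustration, a minimal Python sketch of this positioning step is given below. Rather than transcribing Eq. (1) literally, it solves the same two-anchor circle intersection directly from the range equations (the signs in Eq. (1) depend on the paper's coordinate convention), and it uses atan2 instead of arctan for quadrant safety; the sound speed value and the pairing of arrival times with transmitters A and B are assumptions.

```python
import math

C_SOUND = 343.0  # assumed speed of sound in air [m/s]; in practice temperature dependent

def receiver_position(xa, xb, ya, t_from_a, t_from_b):
    """Locate one ultrasonic receiver from its two time-of-flight measurements.

    Transmitters A = (xa, ya) and B = (xb, ya) lie on the same baseline of the
    docking mechanism; the receiver is assumed to be in front of that baseline.
    This is the two-circle intersection underlying Eq. (1).
    """
    r_a = C_SOUND * t_from_a   # distance from transmitter A
    r_b = C_SOUND * t_from_b   # distance from transmitter B
    x = (r_a**2 - r_b**2 + xb**2 - xa**2) / (2.0 * (xb - xa))
    y = ya + math.sqrt(max(r_a**2 - (x - xa)**2, 0.0))
    return x, y

def robot_pose(xa, xb, ya, t1, t2, t3, t4):
    """Front-receiver positions (points C and D) and heading theta as in Eq. (2)."""
    # Assumed pairing, inferred from Eq. (1): receiver C sees t3 from A and t1 from B,
    # receiver D sees t4 from A and t2 from B.
    x1, y1 = receiver_position(xa, xb, ya, t3, t1)
    x2, y2 = receiver_position(xa, xb, ya, t4, t2)
    theta = math.pi / 2.0 + math.atan2(y2 - y1, x2 - x1)
    return (x1, y1), (x2, y2), theta
```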
Because of data loss and instability in ultrasonic signal detection, the positioning error can become large. An improved median filtering algorithm is therefore introduced to avoid this situation, as shown in Figure 5. First, a queue records the data of the past n measurements and sorts them from small to large, where n is odd for easy calculation. The mid-value is then taken as the output ultrasonic signal; if that mid-value equals zero (a lost measurement), the maximum value in the queue is output instead.
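A minimal sketch of this improved median filter, assuming a window of n = 5 samples and raw range readings in which a dropped measurement appears as zero, could look like this:

```python
from collections import deque

class ImprovedMedianFilter:
    """Median filter over the last n samples; if the median is zero
    (a dropped/lost ultrasonic reading), fall back to the maximum value."""

    def __init__(self, n=5):
        assert n % 2 == 1, "n should be odd so the median is a single sample"
        self.window = deque(maxlen=n)

    def update(self, measurement):
        self.window.append(measurement)
        ordered = sorted(self.window)            # sort from small to large
        mid = ordered[len(ordered) // 2]         # take the mid-value
        return mid if mid != 0 else ordered[-1]  # zero median -> output the maximum
```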
When the distance is less than 0.2 m, the ultrasonic signal is interrupted because of the limited beam angle. A stripe expansion device is designed to bridge the positioning signal between the medium and short distances, as shown in Figure 6. The docking mechanism is higher than the mobile robot, so the robot must move below it to satisfy the motion constraints and then achieve docking and locking. Therefore, a precise positioning mode combining magnetic navigation and a photoelectric switch is adopted to reach the target position at short distance, as shown in Figure 7. Figure 8 demonstrates the docking process at short distance. When the distance reaches 0.2 m, as in Figure 8(a), the mobile robot stops and waits for the magnetic stripe to expand (line AF) and trigger the magnetic navigation sensor at point A.
The magnetic guidance sensor detects the magnetic field of a preset magnetic track to achieve positioning and navigation. The path of the magnetic stripe is usually fixed; in this paper, the stripe is instead installed inversely under the docking mechanism so that it moves with the mechanism, extending the scope of application. When the mobile robot enters the docking module, the magnetic sensor keeps working to correct the position error.
The photoelectric switch detects an object through the occlusion of its beam; it offers high precision, fast response, and strong anti-interference capability, and it is not affected by the magnetic field. When the photoelectric switch is triggered by the raised marker at the end of the mobile robot, the robot immediately stops moving forward, as shown in Figure 8(b).
According to the parameters defined in Figure 8, the relative position relationship between the robot and the docking mechanism is obtained from Eq. (3):
$$\begin{gathered}
\left\{ \begin{array}{l}
AD = b/2 = 150, \\
CD = AD\cos\theta = b\cos\theta /2, \\
\end{array} \right. \\
\therefore \Delta d = b\left( 1 - \cos\theta \right)/2,
\end{gathered}$$
(3)
where b represents the vehicle width (so that AD = b/2), s is the fault-tolerance distance, and point O is the center point of the mobile robot. \(\theta\) and \(\Delta d\) denote the orientation and position error corrections, respectively. The goal of motion control is to drive \(\theta\) and \(\Delta d\) to zero so that the center points coincide.
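A small helper following Eq. (3) might look as follows; the width value in the example call is taken from the AD = b/2 = 150 (mm) relation above, and the function name is illustrative.

```python
import math

def docking_corrections(b_mm: float, theta_rad: float):
    """Eq. (3): lateral correction for a given orientation error.

    b_mm      : vehicle width, so that AD = b/2 (150 mm in the example above)
    theta_rad : measured orientation error of the mobile robot [rad]
    Motion control drives both returned corrections toward zero.
    """
    delta_d = b_mm * (1.0 - math.cos(theta_rad)) / 2.0
    return delta_d, theta_rad

# Example: b = 300 mm, theta = 5 degrees -> delta_d is roughly 0.57 mm
print(docking_corrections(300.0, math.radians(5.0)))
```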
Kalman Filtering
The Kalman filter is a recursive linear quadratic estimator. It combines measurements and predictions, together with statistical noise and error information, to estimate the target variables optimally. Its recursive form is well suited to computer implementation and guarantees real-time performance. The Kalman filter has a wide range of applications, including vehicle and aircraft navigation, signal processing, and robot control; the extended Kalman filter and the unscented Kalman filter extend it to nonlinear systems.
The ultrasonic ranging process is affected by errors such as uncertainty in the sound velocity, time-measurement errors, and Gaussian white noise interference. In this work, the Kalman filter is introduced to improve the filtering efficiency and measurement accuracy and to optimize the ranging process.
It can be seen from Eq. (1) that the relationship between the distance values and the coordinates is nonlinear, and the extended Kalman filter is usually used for such nonlinear systems. To simplify the filtering process and improve computational efficiency, the four measured distances si (i = 1, 2, 3, 4) are equivalently transformed into the state of the midpoint of the mobile robot, as shown in Figure 9. The nonlinear process is thereby converted into a linear one, and the standard Kalman filter is used for filtering.
The Kalman filter consists of two steps: prediction and update. In the prediction step, the current predicted value and the corresponding covariance are obtained from the previous estimate and the state equation. In the update step, the prediction is combined with the measurement to obtain a new estimate and covariance, completing one recursion. The filtering process is shown in Figure 10. The prediction and measurement errors of the system are assumed to be Gaussian white noise.
The positioning system state vector is expressed as \(\boldsymbol{X}_{t} = \left( x_{t}, v_{xt}, y_{t}, v_{yt} \right)\). \(\hat{\boldsymbol{X}}_{t}^{-}\) and \(\hat{\boldsymbol{X}}_{t}\) are defined as the predicted value obtained from the previous iteration and the best estimate produced by the current iteration, respectively. Since the calculation is recursive, an initial value must be set; generally, the first measurement is used as the initial estimate. After that, the predicted value and the estimated covariance are computed as:
$$\left\{ \begin{array}{l}
\hat{\boldsymbol{X}}_{t}^{-} = \boldsymbol{F}_{t} \hat{\boldsymbol{X}}_{t-1} + \boldsymbol{B}_{t} \boldsymbol{u}_{t}, \\
\boldsymbol{P}_{t}^{-} = \boldsymbol{F}_{t} \boldsymbol{P}_{t-1} \boldsymbol{F}_{t}^{\mathrm{T}} + \boldsymbol{Q},
\end{array} \right.$$
(4)
where Ft is the state transition matrix, Bt is the control matrix, ut is the system input, and Q is the covariance matrix of the process noise. The measurement equation of the system is:
$$\boldsymbol{Z}_{t} = \boldsymbol{H} \boldsymbol{X}_{t} + \boldsymbol{v}_{t},$$
(5)
where \(\boldsymbol{Z}_{t} = \left[ \hat{x}_{t}, \hat{y}_{t} \right]^{\mathrm{T}}\) is the measured value, vt is the measurement noise following an \(N\left( 0, \boldsymbol{R} \right)\) distribution, and H is the measurement matrix.
Then the Kalman gain and the update equation are obtained from Eq. (6).
Combined with the ultrasonic signal, the acceleration \(\boldsymbol{u} = \left[ a_{x}, a_{y} \right]^{\mathrm{T}}\) is used as the system input for prediction, and the changes in position and speed \(\left[ \Delta x_{t}, \Delta v_{xt}, \Delta y_{t}, \Delta v_{yt} \right]^{\mathrm{T}}\) are used as the observed values; the state equation can then be expressed as Eq. (7).
$$\left\{ \begin{array}{l}
\boldsymbol{K}_{t} = \boldsymbol{P}_{t}^{-} \boldsymbol{H}^{\mathrm{T}} \left( \boldsymbol{H} \boldsymbol{P}_{t}^{-} \boldsymbol{H}^{\mathrm{T}} + \boldsymbol{R} \right)^{-1}, \\
\hat{\boldsymbol{X}}_{t} = \hat{\boldsymbol{X}}_{t}^{-} + \boldsymbol{K}_{t} \left( \boldsymbol{Z}_{t} - \boldsymbol{H} \hat{\boldsymbol{X}}_{t}^{-} \right), \\
\boldsymbol{P}_{t} = \operatorname{cov}\left( \boldsymbol{X}_{t} - \hat{\boldsymbol{X}}_{t} \right) = \left( \boldsymbol{I} - \boldsymbol{K}_{t} \boldsymbol{H} \right) \boldsymbol{P}_{t}^{-}.
\end{array} \right.$$
(6)
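The prediction and update steps of Eqs. (4) and (6) translate directly into code. The NumPy sketch below is a generic implementation of those two equations; the function and variable names are chosen here for illustration, not taken from the paper.

```python
import numpy as np

def kf_predict(x_est, P, F, B, u, Q):
    """Prediction step, Eq. (4): propagate the state estimate and its covariance."""
    x_pred = F @ x_est + B @ u
    P_pred = F @ P @ F.T + Q
    return x_pred, P_pred

def kf_update(x_pred, P_pred, z, H, R):
    """Update step, Eq. (6): Kalman gain, corrected state, corrected covariance."""
    S = H @ P_pred @ H.T + R                    # innovation covariance
    K = P_pred @ H.T @ np.linalg.inv(S)         # Kalman gain
    x_est = x_pred + K @ (z - H @ x_pred)       # blend prediction and measurement
    P = (np.eye(P_pred.shape[0]) - K @ H) @ P_pred
    return x_est, P
```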
A simulation analysis is performed to verify the effectiveness of the Kalman filter. The motion parameters are set to \(v_{x} = 0.5\ \text{m/s}\), \(v_{y} = 0.3\ \text{m/s}\), \(a_{x} = 0.02\ \text{m/s}^{2}\), \(a_{y} = 0.04\ \text{m/s}^{2}\), starting from the origin at time zero, and the initial state estimate of the mobile robot is \(\boldsymbol{X}_{0} = \left[ 0.5, 0.5, 0.8, 0.3 \right]^{\mathrm{T}}\). The initial value of the covariance matrix is then determined.
$$\hat{\boldsymbol{X}}_{t}^{-} = \left[ \begin{array}{cccc}
1 & \Delta t & 0 & 0 \\
0 & 1 & 0 & 0 \\
0 & 0 & 1 & \Delta t \\
0 & 0 & 0 & 1
\end{array} \right] \hat{\boldsymbol{X}}_{t-1} + \left[ \begin{array}{cc}
\frac{1}{2}\Delta t^{2} & 0 \\
\Delta t & 0 \\
0 & \frac{1}{2}\Delta t^{2} \\
0 & \Delta t
\end{array} \right] \boldsymbol{u}_{t}.$$
(7)
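A minimal simulation in the spirit of this setup, reusing the kf_predict/kf_update sketch given after Eq. (6), is shown below. The sampling period, the noise covariances, the initial covariance, and the position-only measurement matrix (matching \(\boldsymbol{Z}_{t} = \left[ \hat{x}_{t}, \hat{y}_{t} \right]^{\mathrm{T}}\) in Eq. (5)) are assumptions made for illustration.

```python
import numpy as np
# Assumes kf_predict and kf_update from the sketch after Eq. (6) are available.

dt = 0.05                                    # sampling period [s]; assumed, not given in the paper
F = np.array([[1, dt, 0, 0],
              [0, 1,  0, 0],
              [0, 0,  1, dt],
              [0, 0,  0, 1]], dtype=float)   # state transition matrix, Eq. (7)
B = np.array([[0.5 * dt**2, 0],
              [dt, 0],
              [0, 0.5 * dt**2],
              [0, dt]], dtype=float)         # acceleration input matrix, Eq. (7)
H = np.array([[1, 0, 0, 0],
              [0, 0, 1, 0]], dtype=float)    # measure x and y positions only, as in Eq. (5)
Q = np.eye(4) * 1e-4                         # process noise covariance (assumed)
R = np.eye(2) * 0.05                         # measurement noise covariance (assumed)

u = np.array([0.02, 0.04])                   # a_x, a_y [m/s^2] from the simulation setup
x_true = np.array([0.0, 0.5, 0.0, 0.3])      # true state: origin start with the given velocities
x_est = np.array([0.5, 0.5, 0.8, 0.3])       # initial estimate X_0 from the text
P = np.eye(4)                                # initial covariance (assumed)

rng = np.random.default_rng(0)
for _ in range(200):
    x_true = F @ x_true + B @ u                                # constant-acceleration motion
    z = H @ x_true + rng.multivariate_normal(np.zeros(2), R)   # noisy position measurement
    x_pred, P_pred = kf_predict(x_est, P, F, B, u, Q)
    x_est, P = kf_update(x_pred, P_pred, z, H, R)
```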
The actual and filtered values of position and velocity in the x and y directions are shown in Figure 11. Despite the large initial error, the filtered estimate quickly approaches the real state, indicating good tracking performance.