It has been verified that, when the input light is modulated by an azimuthal (or radial) spatial modulator, the hourglass-shaped intensity pattern of the modulated light satisfies Malus's law [21]. In other words, the gray-level distribution of the irradiance image, as shown in Fig. 1, is directly proportional to the square of the cosine of the angle between the azimuthal direction and the darkest direction. The darkest direction, which is parallel (or perpendicular) to the polarization direction, has the minimum radial integral value in the image. To capture the darkest direction accurately and quickly, our method includes three stages: coarse estimation, local Radon transform (LRT), and error correction (EC). They are introduced as follows.

### Coarse estimation

In our algorithm, the darkest direction is first coarsely estimated based on threshold segmentation. To reduce the computational complexity, threshold segmentation is applied only to the pixels on circles with certain radii rather than to all the pixels in the image. Given a set of radii (e.g., $r_1, r_2, r_3, \ldots, r_N$), the pixels on the circles with different radii are collected. Then, the pixels are divided into two parts (i.e., a bright area and a dark area) based on a predefined threshold $T$. The average azimuthal angle of the pixels in the dark area, denoted by $\theta_{\text{c}}$, is treated as the coarse darkest direction, i.e.,

$$\theta_{\text{c}} = \operatorname{mean}\bigl( \arg I(r,\theta) < T \bigr), \quad r = r_{1}, r_{2}, \ldots, r_{N},\ \theta \in [0^{\circ}, 180^{\circ}). \tag{1}$$

where $I(r,\theta)$ is the gray value of the pixel with polar coordinates $(r,\theta)$.
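As a sketch of this stage, the coarse estimation might be implemented as below. The function name, the one-degree circle sampling, and the image-center convention are illustrative assumptions, not the authors' exact implementation; note also that a plain mean of azimuths is only safe when the dark area does not straddle the $0^{\circ}/180^{\circ}$ wrap-around.

```python
import numpy as np

def coarse_estimate(image, radii, threshold, center=None):
    """Coarsely estimate the darkest direction (degrees) by thresholding
    the pixels sampled on circles of the given radii (Eq. (1))."""
    h, w = image.shape
    cy, cx = center if center is not None else (h // 2, w // 2)
    dark_angles = []
    for r in radii:
        # Sample each circle at one-degree steps over [0, 180).
        for deg in range(180):
            t = np.deg2rad(deg)
            x = int(round(cx + r * np.cos(t)))
            y = int(round(cy - r * np.sin(t)))
            if 0 <= x < w and 0 <= y < h and image[y, x] < threshold:
                dark_angles.append(deg)
    # Mean azimuth of the dark pixels is the coarse darkest direction.
    return float(np.mean(dark_angles)) if dark_angles else None
```

Only the sampled circle pixels are touched, which is what keeps this stage cheap relative to scanning the whole image.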

### Local Radon transform

In this stage, the Radon transform [25] is adopted to compute the integral of an image along specified directions. Suppose that $f$ is a 2-D function; the integral of $f$ along the radial line $l(\theta_i) = \{x, y : x\sin\theta_i - y\cos\theta_i = 0\}$ is given by

$$g(\theta_i) = \int_{-\infty}^{\infty} \int_{-\infty}^{\infty} f(x,y)\,\delta(x\sin\theta_i - y\cos\theta_i)\,\mathrm{d}x\,\mathrm{d}y. \tag{2}$$

For digital images, Eq. (2) can be discretized as

$$g(\theta_i) = \sum_{d=-v}^{v} \sum_{x} \sum_{y} I(x,y)\,W(x\sin\theta_i - y\cos\theta_i). \tag{3}$$

In Eq. (3), $I(x,y)$ is the gray value of the pixel with rectangular coordinates $(x,y)$. $W(\xi)$, the weight of the pixel $(x,y)$ for integration along $l(\theta_i)$, is given by

$$W(\xi) = \frac{d - |\xi|}{d}. \tag{4}$$

$d$ is the distance threshold that determines whether the pixel $(x,y)$ lies on the line $l(\theta_i)$.

Obviously, the global Radon transform (GRT) needs to compute the integral of the image along radial lines oriented from $0^{\circ}$ to $180^{\circ}$. Moreover, to obtain an accurate result, the angle interval that the GRT adopts should be as small as possible. Unlike the GRT, the LRT only needs to capture the integral of the image within a local angle range centered at the coarse darkest direction $\theta_{\text{c}}$. For example, assuming the angle range and angle interval for the LRT are $\pm\theta_T$ and $\theta_s$, the LRT is obtained by arranging the radial integral values in azimuthal order, i.e., $G(\theta_{\text{c}}) = \{g(\theta_i)\}$, where $\theta_i = \theta_{\text{c}} - \theta_T + (i-1)\theta_s$, $i = 1, 2, \ldots, 2\theta_T/\theta_s + 1$.
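A minimal sketch of Eqs. (3) and (4) restricted to the local angle range is given below. The function names and the unit-radius stepping along each radial line are illustrative assumptions; the paper's exact pixel-traversal scheme may differ.

```python
import numpy as np

def radial_integral(image, theta_deg, r_max, center, d=1.0):
    """Weighted gray-value integral along the radial line at theta_deg,
    with per-pixel weight W(xi) = (d - |xi|) / d as in Eq. (4)."""
    cy, cx = center
    h, w = image.shape
    t = np.deg2rad(theta_deg)
    total = 0.0
    for r in range(1, r_max + 1):
        px = int(round(cx + r * np.cos(t)))
        py = int(round(cy - r * np.sin(t)))
        if not (0 <= px < w and 0 <= py < h):
            break
        # Perpendicular offset of the rounded pixel from the exact line.
        xi = (px - cx) * np.sin(t) - (cy - py) * np.cos(t)
        if abs(xi) < d:
            total += image[py, px] * (d - abs(xi)) / d
    return total

def local_radon_transform(image, theta_c, theta_T, theta_s, r_max, center):
    """LRT: radial integrals over [theta_c - theta_T, theta_c + theta_T]."""
    angles = theta_c + np.arange(-theta_T, theta_T + theta_s, theta_s)
    return np.array([radial_integral(image, a, r_max, center) for a in angles])
```

Because only $2\theta_T/\theta_s + 1$ directions are integrated instead of the full $180^{\circ}$ sweep, the cost scales with the local window rather than with the GRT's angular resolution.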

As illustrated in Fig. 1, the actual darkest direction of the irradiance image is $25^{\circ}$. When the image is disturbed by Gaussian white noise ($\mu = \sigma^2 = 0.01$), the darkest direction calculated by coarse estimation is $25.06^{\circ}$ (the white solid line in Fig. 1). The LRT is composed of the normalized integral values of the image along the radial lines (the white dotted lines in Fig. 1) oriented counterclockwise from $145.06^{\circ}$ to $85.06^{\circ}$. Here, $\theta_T$ is set to $60^{\circ}$.

### Error correction

Theoretically, the darkest direction has the minimum value in the LRT. Unfortunately, the radial integral values of the image are always disturbed by noise. For instance, the LRT of the image in Fig. 1 is displayed in Fig. 2. The actual darkest direction of the image is $25^{\circ}$, yet the direction that has the minimum value in the LRT is $25.6^{\circ}$. Apparently, under noise, the direction with the minimum value is not the actual darkest direction. To address this issue, EC is developed to estimate the error of the coarse estimation.

Assume we have two modulated irradiance images ($\mathrm{Im}_1$ and $\mathrm{Im}_2$) with hourglass-shaped gray distributions, whose darkest directions are $\theta_{d1}$ and $\theta_{d2}$, respectively. Then $G_1(\theta_{d1} - \theta_a)$ and $G_2(\theta_{d2} - \theta_a)$ have the highest correlation. That is,

$$\operatorname{corr}\bigl(G_1(\theta_{d1} - \theta_a),\, G_2(\theta_{d2} - \theta_a)\bigr) = \max_{\theta \in [0,\pi)} \bigl[\operatorname{corr}\bigl(G_1(\theta),\, G_2(\theta_{d2} - \theta_a)\bigr)\bigr]. \tag{5}$$

$G_1(\theta_{d1} - \theta_a)$ and $G_2(\theta_{d2} - \theta_a)$ denote the LRTs of $\mathrm{Im}_1$ and $\mathrm{Im}_2$ when $\theta_{d1} - \theta_a$ and $\theta_{d2} - \theta_a$ are the centers of the local angle ranges for integration, i.e., $G_1(\theta_{d1} - \theta_a) = \{g_1(\theta_i)\}$ with $\theta_i = \theta_{d1} - \theta_a - \theta_T + (i-1)\theta_s$, and $G_2(\theta) = \{g_2(\theta_i)\}$ with $\theta_i = \theta - \theta_T + (i-1)\theta_s$. Similarly, $G_1(\theta)$ is the LRT of the image $\mathrm{Im}_1$ when the center of the local integral angle range is $\theta$. $\theta_a$ is an arbitrary angle.

Let the coarsely estimated darkest direction for $\mathrm{Im}_2$ be $\theta_c$ and the error of the coarse estimation be $\theta_e$. From Eq. (5), we can infer that the LRT of $\mathrm{Im}_1$ that has the highest correlation with $G_2(\theta_c)$ is $G_1(\theta_{d1} - \theta_e)$. This inference can be represented as

$$\operatorname{corr}\bigl(G_1(\theta_{d1} - \theta_e),\, G_2(\theta_c)\bigr) = \max_{\theta \in [0,\pi)} \bigl[\operatorname{corr}\bigl(G_1(\theta),\, G_2(\theta_c)\bigr)\bigr]. \tag{6}$$

In Eq. (6), the range for $\theta$ is $[0, \pi)$. In fact, the optimal $\theta$ fluctuates around $\theta_{d1}$ because the error of the coarse estimation is small. To reduce computation, the range for $\theta$ can be narrowed to $[\theta_{d1} - \theta_M, \theta_{d1} + \theta_M]$. Substituting $\theta_e = \theta_{d2} - \theta_c$ into Eq. (6), we have

$$\theta_e = \theta_{d1} - \mathop{\arg\max}\limits_{\theta \in [\theta_{d1} - \theta_M,\, \theta_{d1} + \theta_M]} \bigl[\operatorname{corr}\bigl(G_1(\theta),\, G_2(\theta_c)\bigr)\bigr]. \tag{7}$$

Equation (7) links the error of the coarse estimation to the correlation between LRTs. Based on Eq. (7) and $\theta_{d2} = \theta_c + \theta_e$, the actual darkest direction of $\mathrm{Im}_2$ can be captured by

$$\theta_{d2} = \theta_c + \theta_{d1} - \mathop{\arg\max}\limits_{\theta \in [\theta_{d1} - \theta_M,\, \theta_{d1} + \theta_M]} \bigl[\operatorname{corr}\bigl(G_1(\theta),\, G_2(\theta_c)\bigr)\bigr]. \tag{8}$$


In practice, according to Malus's law, $\mathrm{Im}_1$ can be generated and treated as the model image. Apparently, the LRTs of $\mathrm{Im}_1$ also satisfy Malus's law. That is, the integral of the image along the direction $\theta_i$ is

$$g(\theta_i) = A\cos^2\!\left(\theta_i - \theta_{d1} + \frac{\pi}{2}\right). \tag{9}$$

$A$ is a coefficient determined by the image brightness, and $\theta_{d1}$ is the darkest direction of the model image. Using Eq. (9), a set of LRTs of $\mathrm{Im}_1$ (i.e., $G_1(\theta_i)$, $\theta_i = \theta_{d1} - \theta_M + (i-1)\theta_r$, where $\theta_r$ is the step of the center angle) can be obtained. For the input image $\mathrm{Im}_2$, substituting $G_2(\theta_c)$ into Eq. (8), the corrected darkest direction can be captured.
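The EC stage can be sketched as follows, assuming a model image whose darkest direction is $\theta_{d1} = 0^{\circ}$ so that its LRTs follow Eq. (9) directly. The function names, the parameter defaults ($\theta_T = 60^{\circ}$, $\theta_s = 1^{\circ}$, $\theta_M = 5^{\circ}$, $\theta_r = 0.01^{\circ}$), and the use of Pearson correlation for $\operatorname{corr}$ are illustrative assumptions, since the paper does not pin down the correlation measure.

```python
import numpy as np

def model_lrt(center_angle, theta_T=60.0, theta_s=1.0, A=1.0):
    """Analytic LRT of the model image per Eq. (9), with theta_d1 = 0:
    g(theta_i) = A * cos^2(theta_i + 90 deg)."""
    thetas = center_angle + np.arange(-theta_T, theta_T + theta_s, theta_s)
    return A * np.cos(np.deg2rad(thetas) + np.pi / 2) ** 2

def correct_direction(G2, theta_c, theta_M=5.0, theta_r=0.01):
    """Eq. (8) with theta_d1 = 0: find the model-LRT center angle best
    correlated with G2(theta_c) and subtract it from the coarse estimate."""
    centers = np.arange(-theta_M, theta_M + theta_r, theta_r)
    best_c, best_corr = 0.0, -np.inf
    for c in centers:
        corr = np.corrcoef(model_lrt(c), G2)[0, 1]
        if corr > best_corr:
            best_corr, best_c = corr, c
    # theta_d2 = theta_c + theta_d1 - argmax(...) = theta_c - best_c here.
    return theta_c - best_c
```

In an offline setting the candidate model LRTs would be precomputed once rather than regenerated per call, as the implementation-details subsection below notes.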

Taking the image in Fig. 1 as an example, the working mechanism is illustrated in Fig. 3. In this experiment, the darkest direction of the model image is $0^{\circ}$. According to Eq. (9), a set of LRTs of the model image is generated while the center angle changes from $-5^{\circ}$ ($175^{\circ}$) to $5^{\circ}$ in $0.01^{\circ}$ increments. For the input image shown in Fig. 1, the darkest direction predicted by coarse estimation is $\theta_c = 25.6^{\circ}$. In the EC stage, we find that $G_1(0.6^{\circ})$ has the highest correlation with $G_2(\theta_c)$, where $G_1(\cdot)$ and $G_2(\cdot)$ denote the LRTs of the model image and the input image, respectively. Finally, according to Eq. (8), the estimated darkest direction of the input image is corrected to $25^{\circ}$.

### Implementation details

In practice, once the parameters of the algorithm are given, some intermediate data, including the coordinates of the pixels used for coarse estimation and the coordinates and weights of the pixels used for LRT computation, remain unchanged across different input images. Hence, these data can be computed ahead of time and saved in tables, named the circle pixel coordinate table (CPCT), the integral pixel coordinate table (IPCT), and the integral pixel weight table (IPWT), respectively. It should be noted that, because the darkest directions of the input images differ, the coordinates and weights of the pixels for gray integration should be saved for azimuth angles from $0^{\circ}$ to $180^{\circ}$. In addition, the LRTs of the model images with different center angles, which are independent of the input image, can also be computed offline using Eq. (9) and saved.
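A sketch of how these lookup tables might be precomputed is shown below. The storage layout (Python dicts keyed by radius or azimuth) and the sampling choices are assumptions for illustration; the paper does not specify a table format.

```python
import numpy as np

def build_tables(radii, theta_s=1.0, r_max=45, center=(50, 50), d=1.0):
    """Precompute image-independent lookup tables:
    CPCT - pixel coordinates on each sampling circle (coarse estimation);
    IPCT/IPWT - pixel coordinates and Eq. (4) weights for radial
    integration at every azimuth in [0, 180) at the LRT resolution."""
    cy, cx = center
    # CPCT: one (x, y) entry per degree on each circle.
    cpct = {r: [(int(round(cx + r * np.cos(np.deg2rad(a)))),
                 int(round(cy - r * np.sin(np.deg2rad(a)))))
                for a in range(180)]
            for r in radii}
    # IPCT / IPWT: coordinates and weights along each radial line.
    ipct, ipwt = {}, {}
    for a in np.arange(0.0, 180.0, theta_s):
        t = np.deg2rad(a)
        coords, weights = [], []
        for r in range(1, r_max + 1):
            px = int(round(cx + r * np.cos(t)))
            py = int(round(cy - r * np.sin(t)))
            xi = (px - cx) * np.sin(t) - (cy - py) * np.cos(t)
            if abs(xi) < d:
                coords.append((px, py))
                weights.append((d - abs(xi)) / d)
        ipct[a], ipwt[a] = coords, np.array(weights)
    return cpct, ipct, ipwt
```

At run time, an LRT value then reduces to one table lookup plus a weighted sum of the listed pixels, with no trigonometry per input image.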

The flow chart and pseudo code of our method are shown in Fig. 4.

