Weighted central moments in pattern recognition

Pattern Recognition Letters 21 (2000) 381–384

www.elsevier.nl/locate/patrec

Ivar Balslev *, Kasper Døring, Rene Dencker Eriksen
Maersk Mc-Kinney Moller Institute for Production Technology, SDU – Odense University, Campusvej 55, DK-5230 Odense M, Denmark
Received 10 May 1999; received in revised form 17 November 1999

Abstract

We consider Cartesian moments for generating translation and rotation invariant descriptors in two-dimensional pattern recognition. In a number of situations the noise tolerance can be improved by introducing a spatial weight function. In rotation independent classification this function compensates for the fact that simple central moments of high orders give low weight to areas near the center-of-mass. We briefly discuss the advantage of weighted moments in the determination of the rotation angle. The method reported here has been applied successfully in a case study involving a classification of LEGO® bricks subject to robot handling. © 2000 Elsevier Science B.V. All rights reserved.

Keywords: Moment invariants; Noise tolerance; Rotation invariance

1. Introduction

The use of Cartesian moments is powerful in pattern recognition and orientation determination in two dimensions (Wong and Hall, 1978; Gonzales and Woods, 1993). Such methods are superior to direct spatial correlation methods due to their high speed. In the present communication we look at a combined problem, namely rotation/translation independent classification and orientation determination of image elements.

Let us first consider the classification. The standard recipe for using Cartesian moments is the following: the center-of-mass coordinates are found from the zeroth- and first-order moments. These coordinates are used for generating central moments, thereby obtaining translation invariants.

* Corresponding author. Tel.: +45-65-57-35-21; fax: +45-6615-76-97. E-mail address: [email protected] (I. Balslev).

Rotation invariant quantities can be obtained by using Hu type moment invariants (Hu, 1962; Li, 1991) or applying the principal axes method (Balslev, 1998). However, classification with extensive use of central moments becomes inaccurate if the shape variation among classes is concentrated near the center-of-mass of the object. This can be seen by considering the definition of central moments,

\mu_{pq} = \sum_{x,y} (x - x_o)^p (y - y_o)^q,   (1)

where (x, y) is the object point in the summation, and (x_o, y_o) is the center-of-mass. The structure of Eq. (1) implies that noise from regions far from the center-of-mass can easily destroy essential information from regions close to the center-of-mass. The seven Hu type moment invariants (Hu, 1962) are expressed by central moments of rather high order (p + q equal to 2 or 3), and so little information about the region near the center-of-mass


is transferred to the Hu moment invariants. The rotation invariant zeroth-order moment, i.e., the area, does not have this disadvantage, but this quantity is insufficient for classification in a number of situations, such as in scale independent classification or if the different classes happen to have almost equal areas. In Section 2, we introduce a suitable spatial function into Eq. (1) so that the regions near the center-of-mass are given higher weights.

In the orientation determination, the rotation angle can be expressed by the second-order moments and a single third-order moment. This method has a singularity which in special cases leads to large uncertainties from noise. In Section 3, we discuss the increased noise tolerance obtained by means of weighted central moments.

2. Classification using weighted moments

Let us introduce weighted central moments as follows:

\mu_{pq} = \sum_{x,y} F(x,y) \left( x - \frac{m_{10}}{m_{00}} \right)^p \left( y - \frac{m_{01}}{m_{00}} \right)^q,   (2)

where

m_{pq} = \sum_{x,y} F(x,y) \, x^p y^q.   (3)
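As an illustration (not part of the original paper), the moment sums of Eqs. (1)–(3) can be sketched in NumPy; the function and argument names are our own:

```python
import numpy as np

def central_moment(points, p, q, weights=None):
    """Central moment of Eq. (1); with `weights` holding the values of
    F(x, y) at the object pixels, it becomes the weighted moment of
    Eq. (2), with m_pq as defined in Eq. (3)."""
    pts = np.asarray(points, dtype=float)
    w = np.ones(len(pts)) if weights is None else np.asarray(weights, dtype=float)
    m00 = w.sum()                          # m_00
    xo = (w * pts[:, 0]).sum() / m00       # m_10 / m_00
    yo = (w * pts[:, 1]).sum() / m00       # m_01 / m_00
    return np.sum(w * (pts[:, 0] - xo) ** p * (pts[:, 1] - yo) ** q)
```

For F ≡ 1 (the default) this reduces to the simple central moments of Eq. (1).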

In order to enhance the weights of the regions near the center-of-mass, we use a weight function F(x, y) with the form of a two-dimensional Lorentzian given by

F(x,y) = \frac{1}{1 + a^2 \left[ (x - x_o)^2 + (y - y_o)^2 \right]},   (4)

where a is an adjustable parameter. We choose rotational symmetry here, since we are considering rotation independent classification. At large distances from (x_o, y_o), F(x, y) falls off as the distance to the power minus two, and so it is well suited for the desired compensation. Note that (x_o, y_o) is the center-of-mass derived from unmodified zeroth- and first-order moments.
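For illustration, the weight function of Eq. (4) can be evaluated as follows (a sketch; the names are our own):

```python
import numpy as np

def lorentzian_weight(points, center, a):
    """Eq. (4): rotationally symmetric two-dimensional Lorentzian weight
    about `center` = (x_o, y_o), the unweighted center-of-mass; `a` is
    the adjustable parameter."""
    pts = np.asarray(points, dtype=float)
    r2 = (pts[:, 0] - center[0]) ** 2 + (pts[:, 1] - center[1]) ** 2
    return 1.0 / (1.0 + a ** 2 * r2)
```

The weight is 1 at the center-of-mass and decays as the inverse squared distance far from it.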

Note also that Hu type quantities derived from the weighted moments \mu_{pq} are still rotation invariant. The typical range for a is 0 < a < 10/R_G, where R_G is the radius of gyration,

R_G = \sqrt{ \frac{\mu_{20} + \mu_{02}}{\mu_{00}} }.   (5)

We have tested the weighted moments in a case where classification using unmodified moments has failed. We show in Fig. 1 five LEGO® bricks used as windows in a toy house. The silhouettes are seen to have left and right versions (in fact the five bricks have identical shape in three dimensions, but are distinct with respect to which side is facing the camera). In the production plant it is relevant for the robot handling to classify the left and the right versions. Unfortunately, six of the seven Hu invariants are useless in this case, as they are invariant under the mirror transformation. Only one Hu invariant derived from moments with p + q \le 3 can be used, namely

\phi_7 = (3\mu_{21} - \mu_{03})(\mu_{30} + \mu_{12}) \left[ (\mu_{30} + \mu_{12})^2 - 3(\mu_{21} + \mu_{03})^2 \right] + (3\mu_{12} - \mu_{30})(\mu_{21} + \mu_{03}) \left[ 3(\mu_{30} + \mu_{12})^2 - (\mu_{21} + \mu_{03})^2 \right].   (6)

\phi_7 changes sign under a mirror transformation. Due to noise, we could not use the sign of \phi_7 for distinguishing left and right versions in Fig. 1. Instead we use the modified moments defined in Eq. (2). The structure of the applied weighting is illustrated in Fig. 2 for the same image components as shown in Fig. 1; here the gray tone is proportional to the weight function. We have calculated \phi_7 defined by Eq. (6) with simple central moments replaced by weighted ones, and used the sign of this quantity in a left–right classification of image components. Treating 20 images of LEGO bricks similar to those in Figs. 1 and 2, we found the following error frequencies:

a                 0      R_G^{-1}   2 R_G^{-1}   3–7 R_G^{-1}   8 R_G^{-1}   9 R_G^{-1}
Error frequency   50%    40%        25%          0%             15%          20%

It is seen that the unmodified moment invariant (a = 0) gives random results, and that an error-free classification is obtained for values of a in the range 3–7 R_G^{-1}; higher values give increasing error frequency. The reason for these findings is that the left–right specific information is concentrated in a region near the center-of-mass. In this region, which has a radius of about 0.2 R_G, the conventional Hu based method (a = 0) is almost blind. For very large values of a the weighting is

focused on a very small region which does not have left–right specific information. Note that the range giving error-free classification is comfortably wide: its logarithmic width is about a factor of 2, which indicates robust classification if a is chosen in the middle of this range. In industrial applications the value of a should be chosen during training of the machine vision system in order to obtain a reasonable balance between information far from and near the center-of-mass. If a reliable distinction between all classes is not obtained with one single value of a, then one can introduce a tree type procedure. For example, one can perform a left–right independent classification using one value of a, and proceed to a left–right classification using other values of a specific for each mirror symmetric pair.
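For illustration, Eq. (6) is straightforward to evaluate from a table of (possibly weighted) central moments; the dict-based layout below is our own choice, not the paper's:

```python
def hu_phi7(mu):
    """Hu's seventh invariant, Eq. (6), from central moments mu[(p, q)].
    phi_7 changes sign under mirroring, so its sign separates left- and
    right-handed silhouettes."""
    m30, m21 = mu[(3, 0)], mu[(2, 1)]
    m12, m03 = mu[(1, 2)], mu[(0, 3)]
    return ((3 * m21 - m03) * (m30 + m12)
            * ((m30 + m12) ** 2 - 3 * (m21 + m03) ** 2)
            + (3 * m12 - m30) * (m21 + m03)
            * (3 * (m30 + m12) ** 2 - (m21 + m03) ** 2))
```

Mirroring x → −x flips the sign of every moment with odd p, and therefore flips the sign of \phi_7.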

Fig. 1. Machine vision image of five LEGO bricks subject to image analysis with the view of robot handling.

3. Rotation angle determination

Fig. 2. Image components illustrating the weight function used. The value of a used here is 1/12 (pixels^{-1}) and the radius of gyration is about 45 pixels.

In addition to the classification mentioned above, in the actual robot vision project we also needed to determine the orientation about the viewing direction. Using the second-order moments, the orientation angle \theta is given by

\tan 2\theta = \frac{2\mu_{11}}{\mu_{02} - \mu_{20}}.   (7)

The angle \theta refers to a reference object having central moments \mu^o_{pq}, oriented so that \mu^o_{11} = 0. This equation has four solutions between 0 and 2\pi. Let g_{pq}(\theta) be the central moments of the actual object rotated \theta; g_{pq}(\theta) is expressed by the original central moments and the angle \theta. The following two conditions were used by Balslev (1998) for removing the ambiguity:

g_{20}(\theta) > g_{02}(\theta),   (8)

g_{21}(\theta) > 0,   (9)


provided that the reference object is oriented so that \mu^o_{11} = 0, \mu^o_{20} > \mu^o_{02} and \mu^o_{21} > 0.

There is a singularity implied in Eqs. (7)–(9): if \mu^o_{20} - \mu^o_{02} and \mu^o_{21} are both close to zero, the angle determination becomes uncertain due to noise. This situation was detected for the image of the boldface `a' in the font `New Courier' (Balslev, 1998). If such an annoying situation is accidental, the introduction of weighted moments is useful, because the weighting is able to shift the set of moments away from the singularity. Again we choose weight functions which are rotationally symmetric about the center-of-mass. Useful weighting can be obtained by either center-enhancing functions such as F(x, y) of Eq. (4) or center-suppressing functions such as (F(x, y))^{-1}. The center-enhanced moments have an advantage over the center-suppressed ones if the orientation specific image elements are concentrated near the center-of-mass. The choice between these two functions and the choice of a should be made during training, for example by maximizing for each class the absolute relative difference between the eigenvalues of the second-order moment matrix, i.e., |\mu^o_{20} - \mu^o_{02}| / (\mu^o_{20} + \mu^o_{02}), in an orientation giving \mu^o_{11} = 0. In order to neutralize the influence of the singularity at \mu^o_{21} = 0, it is recommended to let self-adjusting algorithms find the numerically largest of the third-order moments \mu^o_{30}, \mu^o_{21}, \mu^o_{12}, and \mu^o_{03}, and use it in Eq. (9) in order to remove the (\theta \,|\, \theta + \pi) ambiguity. The choice of the decisive third-order moment as well as the weight function should be specific for each class.

4. Conclusion and outlook

We have shown that the weight function applied in the calculation of Cartesian moments is able to considerably increase the noise tolerance in rotation/translation independent pattern recognition and determination of orientation.
We use rotation symmetric functions centered about the center-of-mass and suggest very simple functions, F(x, y) of Eq. (4) or (F(x, y))^{-1}, with only one adjustable parameter a. The method can be developed further by considering a wider range of functions, for example

F(x,y) = \frac{1}{1 + a^2 \left[ (x - x_o)^2 + (y - y_o)^2 - b^2 \right]^2},   (10)

which enhances regions near a circle with radius b (provided that a is not very small). It is also important to note that gray tone images can be treated using weighted moment invariants. In industrial applications the choice of function and parameter values should be made during training, using either general automatic optimization or tailor-made algorithms specific for the class of objects considered. Further research is needed for finding general methods in which weight function parameters optimizing the noise tolerance can be derived automatically from the training set.
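For illustration, a ring-enhancing weight of the form of Eq. (10) can be sketched as follows (the names are our own; b selects the enhanced radius):

```python
import numpy as np

def ring_weight(points, center, a, b):
    """Eq. (10): weight peaking on the circle of radius b around
    `center`; `a` controls how sharply the ring is enhanced."""
    pts = np.asarray(points, dtype=float)
    r2 = (pts[:, 0] - center[0]) ** 2 + (pts[:, 1] - center[1]) ** 2
    return 1.0 / (1.0 + a ** 2 * (r2 - b ** 2) ** 2)
```

The weight equals 1 exactly on the circle of radius b and decays away from it, including at the center-of-mass itself.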

Acknowledgements

The authors are grateful to John Immerkær and Brian Kirstein Ramsgaard for fruitful discussion during the course of this work. In the present analysis we used the open-source image processing library IPL98 (http://www.mip.sdu.dk/ipl98), the development of which has been financed by Odense Steel Shipyard and the Maersk Foundation.

References

Balslev, I., 1998. Noise tolerance of moment invariants in pattern recognition. Pattern Recognition Letters 19, 1183–1189.
Gonzales, R.D., Woods, E., 1993. Digital Image Processing. Addison-Wesley, Reading, MA.
Hu, M.K., 1962. Visual pattern recognition by moment invariants. IEEE Transactions on Information Theory 8, 179–187.
Li, Y., 1991. Reforming the theory of invariant moments for pattern recognition. Pattern Recognition 25, 723–783.
Wong, R.Y., Hall, E.L., 1978. Scene matching with invariant moments. Computer Graphics and Image Processing 8, 16–24.