Measurement Science and Technology

**PAPER • OPEN ACCESS**

Development and calibration of an accurate 6-degree-of-freedom measurement system with total station

To cite this article: Yang Gao *et al* 2016 *Meas. Sci. Technol.* **27** 125103






Meas. Sci. Technol. **27** (2016) 125103 (11pp) doi:10.1088/0957-0233/27/12/125103

**Development and calibration of an accurate 6-degree-of-freedom measurement system with total station**

**Yang Gao, Jiarui Lin, Linghui Yang and Jigui Zhu**

State Key Laboratory of Precision Measuring Technology and Instruments, Tianjin University, Tianjin 300072, People’s Republic of China

E-mail: linjr@tju.edu.cn

Received 5 July 2016, revised 27 September 2016; accepted for publication 30 September 2016; published 28 October 2016

**Abstract**

To meet the demands of high accuracy, long range and portability in large-scale metrology for pose measurement, this paper develops a 6-degree-of-freedom (6-DOF) measurement system based on a total station, utilizing its advantages of long range and relatively high accuracy. The cooperative target sensor, which is mainly composed of a pinhole prism, an industrial lens, a camera and a biaxial inclinometer, is designed to be portable in use. Subsequently, a precise mathematical model is proposed from the input variables observed by the total station, the imaging system and the inclinometer to the output six pose variables. The model must be calibrated at two levels: the intrinsic parameters of the imaging system, and the rotation matrix between the coordinate systems of the camera and the inclinometer. Corresponding approaches are presented for both. For the first level, we introduce a precise two-axis rotary table as a calibration reference; for the second level, we propose a calibration method that varies the pose of a rigid body carrying the target sensor and a reference prism. Finally, through simulations and various experiments, the feasibility of the measurement model and calibration methods is validated, and the measurement accuracy of the system is evaluated.

Keywords: calibration, large-scale metrology, pose measurement, 6-DOF measurement, total station

(Some figures may appear in colour only in the online journal)

**1. Introduction**

Large-scale metrology (LSM) has become a routinely used tool in the manufacturing and engineering of large objects such as radio antennae, aircraft, ships and tunnel boring machines [1, 2]. Considerable attention in the area of LSM has been focused on pose measurement, which requires 6 degrees of freedom (6-DOF) to track and control the complex motions of a target in accelerator alignment, tunnel boring machine guidance, large-object assembly, robotics and the logistics industry [3–6]. The practical applications need

Original content from this work may be used under the terms of the Creative Commons Attribution 3.0 licence. Any further distribution of this work must maintain attribution to the author(s) and the title of the work, journal citation and DOI.

the sensor to have a high accuracy at the sub-millimeter level even at distances of more than 100 m, to be robust to complex environments, and to be portable in use.

Many designs and instruments have been developed for 6-DOF measurement in LSM. One available method is to measure the coordinates of multiple target points with any 3D measurement instrument in this field; for 6-DOF motion, at least three points are required. However, this multi-point method is inconvenient in practice, and since each point is measured one by one, it is neither efficient nor real-time. Another technique is the well-known perspective-n-point (PnP) problem [7] in the photogrammetry area, which uses a single camera for 6-DOF measurement [8, 9]. However, limited by camera resolution and field of view, the PnP technique cannot achieve high accuracy in a large space.

0957-0233/16/125103+11$33.00 © 2016 IOP Publishing Ltd Printed in the UK


**Figure 1. **Configuration of the PMTS.

Liu *et al* introduced a 6-DOF measurement method using multiple targets with a single-station instrument similar to an indoor GPS [10]. Since its principle is very similar to the PnP technique, the same disadvantages remain. Besides, some 6-DOF probes based on laser trackers have been developed by commercial companies such as Leica and API, and these have excellent accuracies [11]. The solution from Leica uses a motorized camera on the tracker station to track multiple points on the probe, while API invented a cooperative sensor with a pinhole prism, whose vertex is cut to form a light channel, and a PSD behind it to calculate the incident angle of the tracker beam. However, laser trackers are extremely costly and not useful in outdoor environments. Stephen Kyle has also discussed plenty of designs for 6-DOF probing in cooperation with laser trackers or total stations [12], but has not pursued them further. Furthermore, many combined measurement scenarios using cameras, laser sensors, structured light and even motorized stages have been proposed [13–17]. Despite meeting their own requirements, these methods do not perform accurately enough in long-range measurement.

As described above, most 6-DOF measurement systems are mainly composed of two parts: a base station (or multiple stations), which is usually stationary, and a cooperative target sensor placed on the target. Laser trackers, indoor GPS, cameras and total stations are all competent to act as base measurement stations. Of these traditional instruments in LSM, the total station is a popular tool in both industry and outdoor engineering, showing advantages in flexibility, measurement efficiency, low on-site calibration requirements, long range (at least hundreds of meters) and relatively high accuracy (1 mm level). Therefore, based on the aforementioned studies, this paper develops a pose measurement system with a total station, and the cooperative target sensor, which is mainly composed of a pinhole prism, an industrial lens, a camera and a biaxial inclinometer, is referred to as the pose measurement target sensor (PMTS).

Y Gao *et al*

**Figure 2. **The measurement light path and imaging principle of PMTS. (a) Light path. (b) Image at plane A. (c) Image at plane B. (d) Final image at plane C.

PMTS is a single-point 6-DOF measurement target, designed to be very small and light for convenient use. A precise mathematical model is established from the original data observed by the total station, the camera and the inclinometer to the final six pose variables, and calibration methods are presented to determine the unknown parameters in the model. With the help of the total station, the precise model and calibration we propose guarantee that this system performs accurately in long-range measurement.

This paper is organized as follows. In section 2, the configuration of the target sensor PMTS is described, and the mathematical model for 6-DOF measurement is established in section 3. Section 4 introduces the calibration methods at different levels. Then, several simulations and experimental validations are described in section 5. Finally, concluding remarks and a brief overview of further work are presented.

**2. Target sensor description**

*2.1. Sensor configuration*

The target sensor PMTS is designed to be light and handy, at 150 mm × 90 mm × 90 mm in size and 1.75 kg in weight. Its structure inside the aluminum casing is shown in figure 1. The pinhole prism is made from a traditional cube-corner prism whose vertex is cut off in a small part to form a light channel. An industrial lens with an optical filter, a camera, an inclinometer and processing circuits are integrated on a steel



**Figure 3. **Definitions of all frames and their relationships in 6-DOF measurement.

**Table 1. **Naming rules for the variables in this paper.

| Variable | Description | Naming rule |
| --- | --- | --- |
| $\boldsymbol{R}$ | $3 \times 3$ unit rotation matrix | A matrix of dimension $n \times n$ ($n > 1$) is written in uppercase italic bold |
| $\boldsymbol{p}$ | Point coordinate | A matrix of dimension $n \times 1$ ($n > 1$) is written in lowercase italic bold |
| $a$ | Angle | A single value is written in lowercase italic |
| ${}^T p$ | The coordinate value is in frame $T$ | The mark on the upper left corner of a matrix represents its coordinate system |
| ${}^T_S R$ | The transformation is from frame $S$ to frame $T$ | The marks on both the upper left and lower left corners of a matrix represent a transformation between two coordinate systems |

support behind the pinhole prism. Outside the casing, the DC power supply and the Ethernet communication with an upper computer both run through the cable. The upper computer on the one hand controls the total station and transfers its measurement data to PMTS, and on the other hand receives the result processed by the CPU of PMTS, displaying it and using it for subsequent applications.

*2.2. Light path and imaging*

During practical applications, the measurement beam (which can be considered parallel light) from a total station is incident on the pinhole prism. The light path diagram is illustrated in figure 2. For PMTS, most of the incident light is reflected back by the prism, which acts as a 3D cooperative target as usual. Meanwhile, the other part passes through the pinhole and is recorded by the subsequent lens and camera. Since the pinhole of the cube-corner prism is triangular, the image behind it is also triangular, and diffraction occurs (figure 2(b)). By the effect of a circular aperture clinging to the pinhole, the image behind it is round (figure 2(c)). Finally, the converging lens, which is focused at infinity, concentrates the spot on the imaging plane (figure 2(d)), and the center of gravity method [18] is used to find the sub-pixel centroid of the spot. In addition, the optical filter added in the light path ensures that only light in the wave band of the measurement laser reaches the camera, and tiny stray spots are easily eliminated according to their sizes. Although good imaging is achievable in most cases, active light sources such as spotlights, which emit strong light over a large spectrum, may disturb the main spot, so these light sources should be avoided when using PMTS.
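The center-of-gravity centroiding step mentioned above can be sketched as follows. This is a minimal illustration of the standard intensity-weighted method, not the authors' implementation; the function name and the threshold handling are assumptions:

```python
import numpy as np

def spot_centroid(img, threshold):
    """Sub-pixel centroid of a laser spot by the intensity-weighted
    center-of-gravity method.  Pixels at or below `threshold` are
    treated as background and ignored."""
    img = np.asarray(img, dtype=float)
    weights = np.where(img > threshold, img, 0.0)
    total = weights.sum()
    if total == 0:
        raise ValueError("no pixel above threshold")
    ys, xs = np.mgrid[0:img.shape[0], 0:img.shape[1]]
    u = (xs * weights).sum() / total   # column (u) coordinate
    v = (ys * weights).sum() / total   # row (v) coordinate
    return u, v
```

Thresholding before weighting is what makes small stray spots easy to reject by size, as the text describes.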

**3. Mathematical modeling**

*3.1. Definition of coordinate systems*

A total of three different coordinate systems are defined in the 6-DOF measurement: the total station frame ($O_T X_T Y_T Z_T$ coordinate, frame $T$) at the base, and the sensor frame ($O_S X_S Y_S Z_S$ coordinate, frame $S$) and the camera frame ($O_C X_C Y_C Z_C$ coordinate, frame $C$) in the sensor, as shown in figure 3. A total station is a spherical coordinate measuring system: the origin $O_T$ is the starting point of the range measurement, the $Z_T$-axis is vertical, and the $X_T$-axis and $Y_T$-axis are respectively defined as north and east as in geodesy. Since a total station must be leveled during measurement, the $X_T$-$Y_T$ plane is parallel to the ground horizontal plane. We define the reflection center of the prism on PMTS as the origin of frame $S$, the directions of the inclinometer's two axes as the directions of the $X_S$-axis and $Y_S$-axis, and the $Z_S$-axis as normal to the $X_S$-$Y_S$ plane.

In frame $C$, taking all the components in the light path including the prism into account, the reflection center of the prism, where the incident rays intersect, is also defined as the origin $O_C$, and the $X_C$-axis and $Y_C$-axis are defined by the camera in the traditional way. As frame $T$ is a left-handed coordinate system (the inherent definition of a total station), all the coordinate systems in this paper are defined as left-handed systems.

For ease of understanding, the naming rules of the variables in this paper are listed in table 1.

*3.2. Imaging model*

Different from a traditional imaging system which maps the 3D points in a world coordinate system to 2D image points,



the imaging system in PMTS instead maps laser beams. This imaging system conforms to the pinhole model with lens distortion, and the reflection center of the prism is the optical center. As shown in the camera frame definition part of figure 3, let the unit vector ${}^C v$ denote the laser beam in frame $C$, let $(u', v')$ denote the real (distorted) image pixel coordinates, and let $(u, v)$ denote the ideal image pixel coordinates. According to the pinhole imaging model, the four-parameter camera model [9] is given by

$$\begin{bmatrix} u \\ v \\ 1 \end{bmatrix} = \begin{bmatrix} a_x & 0 & u_0 \\ 0 & a_y & v_0 \\ 0 & 0 & 1 \end{bmatrix} \begin{bmatrix} {}^C v(x)/{}^C v(z) \\ {}^C v(y)/{}^C v(z) \\ 1 \end{bmatrix} = \begin{bmatrix} a_x & 0 & u_0 \\ 0 & a_y & v_0 \\ 0 & 0 & 1 \end{bmatrix} \begin{bmatrix} x \\ y \\ 1 \end{bmatrix} \tag{1}$$

where $(u_0, v_0)$ denote the image coordinates of the camera's principal point, $(x, y)$ are the ideal normalized image coordinates, and $a_x$ and $a_y$ are scale factors. Optical lens distortion is then added to the ideal coordinates to obtain a precise model; in this paper we only consider the first two terms of radial distortion [19]. Let $(x', y')$ be the real (distorted) normalized image coordinates. We have

$$x' = x + x\left(k_1(x^2+y^2) + k_2(x^2+y^2)^2\right)$$
$$y' = y + y\left(k_1(x^2+y^2) + k_2(x^2+y^2)^2\right) \tag{2}$$

where $k_1$ and $k_2$ are the coefficients of the radial distortion. From $u' = u_0 + a_x x'$ and $v' = v_0 + a_y y'$, we have

$$u' = u + (u - u_0)\left[k_1(x^2+y^2) + k_2(x^2+y^2)^2\right]$$
$$v' = v + (v - v_0)\left[k_1(x^2+y^2) + k_2(x^2+y^2)^2\right] \tag{3}$$

The imaging model thus maps laser beam vectors to the corresponding spot centroids. In reverse, the laser beam vector ${}^C v$ can also be obtained from the centroid of the real spot $(u', v')$ according to equations (1)–(3) once all the camera parameters are calibrated.
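The reverse mapping just described, from a detected centroid $(u', v')$ back to the beam vector ${}^C v$, can be sketched as below. The fixed-point undistortion loop is a common technique and an assumption here; the paper does not state how equations (2)–(3) are inverted:

```python
import numpy as np

def pixel_to_beam(u_d, v_d, ax, ay, u0, v0, k1, k2, n_iter=10):
    """Recover the unit laser-beam vector Cv in frame C from the
    distorted spot centroid (u_d, v_d), inverting equations (1)-(3).
    The radial distortion is inverted by fixed-point iteration, which
    converges quickly for small k1, k2."""
    xd = (u_d - u0) / ax          # distorted normalized coordinates
    yd = (v_d - v0) / ay
    x, y = xd, yd                 # initial guess: ignore distortion
    for _ in range(n_iter):
        r2 = x * x + y * y
        factor = 1.0 + k1 * r2 + k2 * r2 * r2
        x, y = xd / factor, yd / factor
    v = np.array([x, y, 1.0])
    return v / np.linalg.norm(v)  # unit vector Cv
```

A forward projection through equations (1)–(3) followed by this routine should round-trip to the original beam direction.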

*3.3. Measurement model*

The underlying principle of the 6-DOF pose measurement is to solve for six unknowns (three positions and three orientations) in the rotation matrix ${}^T_S R$ and the translation vector ${}^T t$. With the spatial transformation, a point ${}^S p$ in frame $S$ is mapped to the same point ${}^T p$ in frame $T$ as follows:

$${}^T p = {}^T_S R \times {}^S p + {}^T t \tag{4}$$

Note that the orientations are represented in $X$-$Y$-$Z$ fixed angles, known as roll–pitch–yaw $(\gamma, \beta, \alpha)$ with respect to the reference frame, as illustrated in figure 3, so ${}^T_S R$ can be expressed as follows:

$${}^T_S R = \begin{bmatrix} \cos\alpha & -\sin\alpha & 0 \\ \sin\alpha & \cos\alpha & 0 \\ 0 & 0 & 1 \end{bmatrix} \begin{bmatrix} \cos\beta & 0 & \sin\beta \\ 0 & 1 & 0 \\ -\sin\beta & 0 & \cos\beta \end{bmatrix} \begin{bmatrix} 1 & 0 & 0 \\ 0 & \cos\gamma & -\sin\gamma \\ 0 & \sin\gamma & \cos\gamma \end{bmatrix} \tag{5}$$

The translation vector ${}^T t$ is directly observed from the total station measurement of the prism on PMTS. The calculation of the roll and pitch angles is based on the assumption that the

**Figure 4. **System layout and frame definition in the calibration of the imaging system. (Actually it is a three-axis rotary table, but only two internal axes are used for calibration).

horizontal plane at PMTS and that at the total station are the same. Since the earth-curvature effect over a distance of 100 m is only about 3″, far less than the measurement uncertainty of the inclinometer (0.005°), we consider the two horizontal planes as the same. Then roll angle $\gamma$ and pitch angle $\beta$ are obtained from the inclinometer through simple geometrical calculations according to the coordinate system definition of frame $S$. In detail, as shown in the sensor frame definition part of figure 3, the observed data $(\eta, \theta)$ indicate the inclined angles between the axes of the inclinometer and the horizontal plane, which is parallel to the $X_T$-$Y_T$ plane; then $\beta$ and $\gamma$ are calculated using (6), and the corresponding proof is presented in appendix A.

$$\beta = \eta, \qquad \gamma = \arcsin\!\left(\frac{\sin\theta}{\cos\eta}\right) \tag{6}$$

The last degree of freedom is the yaw angle $\alpha$. As the origins of frame $C$ and frame $S$ are defined to be the same, the transformation between these two coordinate systems needs only three orientations, so we denote the rotation matrix from frame $C$ to frame $S$ as ${}^S_C R$. The unit vector of the measurement beam in frame $T$ is the normalization of the translation vector ${}^T t$, that is, ${}^T v = {}^T t / \|{}^T t\|$; then the transformation relationship between ${}^C v$ and ${}^T v$ is described by

$${}^T_S R \times {}^S_C R \times {}^C v = {}^T v \tag{7}$$

*v*Substituting for *S*** R **by (5) in (7), simplifying and results in:

4

Meas. Sci. Technol. **27 **(2016) 125103 Y Gao *et al*

cosa −sina sina cosa

0 0

where

0*p*(*x*) *T**v*(*x*)

0 *p**p*((*y**z*)) = *T*** v**=

*T*

*T*

*v*

*v*((

*y*

*z*)) (8)

*p*(*x*) cosb 0 *p*(*y*) = 0 1 *p*(*z*) −sinb 0

sinb 0 cosb0

0 cosg

sing

−sing × *S*** R **×

*C*

**cosg**

*v*(9)

Only the first two rows of equation (8) include the unknown yaw angle a, after the roll and pitch angle are calculated by (6), yaw angle a is derived by

*p*(*x*) ×*T**v*(*y*) − *p*(*y*) × *T**v*(*x*) *p*(*x*) × *T**v*(*x*) + *p*(*y*) × *T**v*(*y*)

(10)

and the proof of (10) is presented in appendix B.

Note that two aspects of the whole model must be calibrated before measurement: the unknown intrinsic parameters $(a_x, a_y, u_0, v_0, k_1, k_2)$ of the imaging model, and the transformation matrix ${}^S_C R$ between frame $C$ and frame $S$. The calibration methods for these two levels are expounded in section 4.
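Pulling equations (6), (9) and (10) together, the pose solution of this section can be sketched as below. This is a minimal illustration, not the authors' code; the function and variable names are ours, and the roll-angle formula follows our reading of equation (6), whose proof in appendix A is not reproduced here:

```python
import numpy as np

def rot_x(g):
    """Rotation about X by roll angle gamma, as in equation (5)."""
    c, s = np.cos(g), np.sin(g)
    return np.array([[1.0, 0.0, 0.0], [0.0, c, -s], [0.0, s, c]])

def rot_y(b):
    """Rotation about Y by pitch angle beta, as in equation (5)."""
    c, s = np.cos(b), np.sin(b)
    return np.array([[c, 0.0, s], [0.0, 1.0, 0.0], [-s, 0.0, c]])

def solve_pose(T_t, eta, theta, Cv, R_sc):
    """6-DOF solution following section 3: T_t is the prism position
    from the total station, (eta, theta) the inclinometer readings,
    Cv the unit beam vector from the imaging model, and R_sc the
    calibrated rotation from frame C to frame S."""
    beta = eta                                        # pitch, equation (6)
    gamma = np.arcsin(np.sin(theta) / np.cos(eta))    # roll, equation (6)
    Tv = T_t / np.linalg.norm(T_t)                    # beam vector in frame T
    p = rot_y(beta) @ rot_x(gamma) @ R_sc @ Cv        # equation (9)
    alpha = np.arctan2(p[0] * Tv[1] - p[1] * Tv[0],
                       p[0] * Tv[0] + p[1] * Tv[1])   # equation (10)
    return T_t, gamma, beta, alpha
```

Using `arctan2` on the numerator and denominator of (10) resolves the quadrant ambiguity of a plain arctangent.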

**4. Calibration methods**

*4.1. Calibration of the imaging system*

Camera calibration is a necessary step in 3D computer vision and much work has been done on it [19–21]. However, unlike a traditional imaging system, which maps 3D points in the world coordinate system to 2D image points, the vision system in PMTS maps laser beam vectors. An accurately known control field is always used as the calibration reference: for a traditional imaging system it is a 3D point field, but for PMTS it is a field of laser beams in space. In order to construct incident laser beams from different positions, we introduce a two-axis rotary table that rotates PMTS in two dimensions in front of a stationary total station. As shown in figure 4, the rotary table has three main components: the fixed base, the external frame, which rotates around the vertical-rotation axis with respect to the fixed base, and the internal frame, which rotates around the horizontal-rotation axis with respect to the external frame. PMTS is mounted on the internal frame of the rotary table with the prism pointing towards the fixed total station. The reflection center of the prism should be adjusted to the rotary center of the rotary table, where the vertical-rotation axis and the horizontal-rotation axis meet, in order to make sure that during rotation the laser beam vector from the total station to the prism remains stationary.

The rotary table coordinate frame ($O_R X_R Y_R Z_R$ coordinate, frame $R$) is fixed on the internal frame; its origin is at the rotary center, and the $X_R$-axis and $Y_R$-axis coincide with the vertical-rotation and horizontal-rotation axes in the default position (figure 4). In this position the default rotation is at vertical angle $\psi = 0°$ and horizontal angle $\phi = 0°$, and we take the unit vector of the laser beam from

**Figure 5. **Principle of calibration between the coordinate systems of the camera and the inclinometer.

the total station in frame $R$ as ${}^R v_0$. When the internal frame and PMTS are rotated by vertical angle $\psi(i)$ and horizontal angle $\phi(i)$ (the $i$th place), the attitude of frame $R$ with respect to the fixed base and the stationary laser beam changes, and the laser beam vector in frame $R$ is transformed as:

$${}^R v(i) = {}^R R(i) \times {}^R v_0 \tag{11}$$

*v*where the corresponding 2D transformation matrix *R*** R**(

*i*) is expressed as follow:

cos (*i*) 0 −sin (*i*) 1 0 0

*R*** R**(

*i*) = 0 1 0 0 cos (

*i*) sin (

*i*) (12) sin (

*i*) 0 cos (

*i*) 0 −siny(

*i*) cosy(

*i*)

Since the origins of frame $R$ and frame $C$ are the same, the transformation between these two coordinate systems needs only three orientations, so we denote the rotation matrix from frame $R$ to frame $C$ as ${}^C_R R$; then the laser beam vector in frame $C$ is expressed as:

$${}^C v(i) = {}^C_R R \times {}^R v(i) = {}^C_R R \times {}^R R(i) \times {}^R v_0 \tag{13}$$

Equation (13) establishes the exact relationship between the stationary laser beam vector and the camera coordinate system via the 2D rotary transformation of the rotary table. Taking $(u'(i), v'(i))$ as the projection of the beam vector ${}^C v(i)$ onto the image plane according to equations (1)–(3) of the imaging model in section 3, this relationship extends to the image pixel coordinate system.

For $n$ different rotation positions of the rotary table during calibration, assume that the image pixel coordinates of the detected laser spot at the $i$th place are $(\bar{u}(i), \bar{v}(i))$. All the unknowns in this method $({}^R v_0, {}^C_R R, a_x, a_y, u_0, v_0, k_1, k_2)$ can be obtained by minimizing the following function:

$$J({}^R v_0, {}^C_R R, a_x, a_y, u_0, v_0, k_1, k_2) = \sum_{i=1}^{n}\left[(\bar{u}(i) - u'(i))^2 + (\bar{v}(i) - v'(i))^2\right] + M\left(\|{}^R v_0\| - 1\right)^2 \tag{14}$$

where $M$ is a penalty factor set to a large value.

The rotation ${}^C_R R$ can be parameterized by a vector of three Euler angles according to equation (5) for easy calculation. Minimizing (14) is a nonlinear minimization problem, which can be solved using optimization techniques such as the Levenberg–Marquardt algorithm [22]. Note that reasonable initial values should be given to reach the global optimum.
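A minimal sketch of minimizing (14) with the Levenberg–Marquardt algorithm, here via SciPy's `least_squares` with `method='lm'`. The parameter packing, the penalty weight and all names are our assumptions, not the authors' code:

```python
import numpy as np
from scipy.optimize import least_squares

def euler_to_R(tx, ty, tz):
    """Rz(tz) @ Ry(ty) @ Rx(tx), the same factor order as equation (5)."""
    cx, sx = np.cos(tx), np.sin(tx)
    cy, sy = np.cos(ty), np.sin(ty)
    cz, sz = np.cos(tz), np.sin(tz)
    Rz = np.array([[cz, -sz, 0.0], [sz, cz, 0.0], [0.0, 0.0, 1.0]])
    Ry = np.array([[cy, 0.0, sy], [0.0, 1.0, 0.0], [-sy, 0.0, cy]])
    Rx = np.array([[1.0, 0.0, 0.0], [0.0, cx, -sx], [0.0, sx, cx]])
    return Rz @ Ry @ Rx

def project(v, ax, ay, u0, v0, k1, k2):
    """Forward imaging model, equations (1)-(3)."""
    x, y = v[0] / v[2], v[1] / v[2]
    r2 = x * x + y * y
    d = 1.0 + k1 * r2 + k2 * r2 * r2
    return u0 + ax * x * d, v0 + ay * y * d

def residuals(params, table_R, spots, M=1e6):
    """Stacked residuals of cost (14): reprojection error at every
    rotary-table pose plus the penalty keeping Rv0 a unit vector.
    params packs [Rv0 (3), Euler angles of the C<-R rotation (3),
    ax, ay, u0, v0, k1, k2] -- this packing is an assumption."""
    v0_beam = params[0:3]
    R_cr = euler_to_R(*params[3:6])
    ax, ay, u0, v0, k1, k2 = params[6:12]
    res = []
    for Rr, (ud, vd) in zip(table_R, spots):
        Cv = R_cr @ Rr @ v0_beam              # equation (13)
        u, v = project(Cv, ax, ay, u0, v0, k1, k2)
        res += [u - ud, v - vd]
    res.append(np.sqrt(M) * (np.linalg.norm(v0_beam) - 1.0))
    return np.array(res)

# fit = least_squares(residuals, x0, args=(table_R, spots), method='lm')
```

The penalty enters the residual vector as $\sqrt{M}(\|{}^R v_0\| - 1)$ so that its square reproduces the last term of (14).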

*4.2. Calibration of the rotation between the camera and the inclinometer*

As described in section 3, the aim of calibrating the transformation between the coordinate systems of the camera and the inclinometer is to determine the rotation matrix ${}^S_C R$ in the measurement model. In this paper, based on the characteristics of the 6-DOF measurement system, we present a method utilizing a prism as a reference target, fixed on a rigid body together with PMTS, to calibrate the unknown relationship between these two coordinate systems.

The calibration layout and process are illustrated in figure 5. During calibration, the rigid body is placed at several positions and orientations with respect to the stationary total station. Let ${}^S p_r$ denote the unknown coordinate of the reference target in frame $S$. Since the reference target and PMTS are fixed on a rigid body, the value of ${}^S p_r$ does not vary as the rigid body moves. At the $i$th place, the coordinate of the reference target in frame $T$ is directly measured by the total station as ${}^T p_r(i)$, and the transformation relationship between these two values is then:

$${}^T p_r(i) = {}^T_S R(i) \times {}^S p_r + {}^T t(i) \tag{15}$$

where ${}^T_S R(i)$ and ${}^T t(i)$ are the rotation matrix and translation vector relating frame $S$ to frame $T$, and ${}^T_S R(i)$ can be parameterized by the Euler angles $(\gamma(i), \beta(i), \alpha(i))$ as defined in equation (5).

Substituting the Euler angles for ${}^T_S R(i)$ in (15) and rewriting results in:

$$\begin{bmatrix} \cos\alpha(i) & -\sin\alpha(i) & 0 \\ \sin\alpha(i) & \cos\alpha(i) & 0 \\ 0 & 0 & 1 \end{bmatrix} \begin{bmatrix} \cos\beta(i) & \sin\beta(i)\sin\gamma(i) & \sin\beta(i)\cos\gamma(i) \\ 0 & \cos\gamma(i) & -\sin\gamma(i) \\ -\sin\beta(i) & \cos\beta(i)\sin\gamma(i) & \cos\beta(i)\cos\gamma(i) \end{bmatrix} {}^S p_r = {}^T p_r(i) - {}^T t(i) = \begin{bmatrix} {}^T p_r(i)(x) - {}^T t(i)(x) \\ {}^T p_r(i)(y) - {}^T t(i)(y) \\ {}^T p_r(i)(z) - {}^T t(i)(z) \end{bmatrix} \tag{16}$$

According to the measurement model in section 3, the pitch angle $\beta(i)$ and roll angle $\gamma(i)$ are directly obtained from the inclinometer, and ${}^T t(i)$ from the total station measurement. Therefore only ${}^S p_r$ and the yaw angle $\alpha(i)$ are unknown in equation (16). Note that the third row of (16) does not include $\alpha(i)$; it is picked out as:

$$\begin{bmatrix} -\sin\beta(i) & \cos\beta(i)\sin\gamma(i) & \cos\beta(i)\cos\gamma(i) \end{bmatrix} {}^S p_r = {}^T p_r(i)(z) - {}^T t(i)(z) \tag{17}$$

Given $n$ positions and orientations, we can stack all such equations together to obtain $n$ equations in total:

$$\begin{bmatrix} -\sin\beta(1) & \cos\beta(1)\sin\gamma(1) & \cos\beta(1)\cos\gamma(1) \\ -\sin\beta(2) & \cos\beta(2)\sin\gamma(2) & \cos\beta(2)\cos\gamma(2) \\ \vdots & \vdots & \vdots \\ -\sin\beta(n) & \cos\beta(n)\sin\gamma(n) & \cos\beta(n)\cos\gamma(n) \end{bmatrix} {}^S p_r = \begin{bmatrix} {}^T p_r(1)(z) - {}^T t(1)(z) \\ {}^T p_r(2)(z) - {}^T t(2)(z) \\ \vdots \\ {}^T p_r(n)(z) - {}^T t(n)(z) \end{bmatrix} \tag{18}$$

Equation (18) can be simplified in matrix form as $D \times {}^S p_r = d$. If $n \geqslant 3$, the solution is given by least squares:

$${}^S p_r = (D^T D)^{-1} D^T d \tag{19}$$
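The stacked system (18) and its solution (19) can be sketched as follows; `np.linalg.lstsq` is used in place of the explicit normal-equation formula, a standard and numerically safer equivalent (the function name is ours):

```python
import numpy as np

def solve_sp(betas, gammas, dz):
    """Solve the stacked system (18) for Sp_r.  betas, gammas are the
    pitch/roll angles (rad) at each place; dz holds the right-hand
    sides Tp_r(i)(z) - Tt(i)(z)."""
    D = np.column_stack([-np.sin(betas),
                         np.cos(betas) * np.sin(gammas),
                         np.cos(betas) * np.cos(gammas)])
    # least-squares solution (19); lstsq avoids explicitly forming
    # the potentially ill-conditioned product (D^T D)^-1 D^T
    sp, *_ = np.linalg.lstsq(D, dz, rcond=None)
    return sp
```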

Then take the first two rows of (16), which can be rewritten as:

$$\begin{bmatrix} \cos\alpha(i) & -\sin\alpha(i) \\ \sin\alpha(i) & \cos\alpha(i) \end{bmatrix} \begin{bmatrix} p_x \\ p_y \end{bmatrix} = \begin{bmatrix} {}^T p_r(i)(x) - {}^T t(i)(x) \\ {}^T p_r(i)(y) - {}^T t(i)(y) \end{bmatrix} \tag{20}$$

where

$$\begin{bmatrix} p_x \\ p_y \end{bmatrix} = \begin{bmatrix} \cos\beta(i) & \sin\beta(i)\sin\gamma(i) & \sin\beta(i)\cos\gamma(i) \\ 0 & \cos\gamma(i) & -\sin\gamma(i) \end{bmatrix} {}^S p_r \tag{21}$$

Equation (20) has the same form as described in appendix B, and the yaw angle $\alpha(i)$ is solved after substituting the calculated value ${}^S p_r$ into (21). Therefore the transformation matrix ${}^T_S R(i)$ is obtained according to the definition of equation (5).

Now the objective rotation matrix ${}^S_C R$ to be calculated is finally introduced according to (7), which is rewritten in a new form:

$${}^S_C R \times {}^C v(i) = ({}^T_S R(i))^{-1} \times {}^T v(i) \tag{22}$$

where ${}^C v(i)$ and ${}^T v(i)$ are the same beam vector in frame $C$ and frame $T$ respectively at the $i$th place. By stacking $n$ such equations as (22) together, we have

$${}^S_C R \begin{bmatrix} {}^C v(1) & {}^C v(2) & \cdots & {}^C v(n) \end{bmatrix} = \begin{bmatrix} ({}^T_S R(1))^{-1}\,{}^T v(1) & ({}^T_S R(2))^{-1}\,{}^T v(2) & \cdots & ({}^T_S R(n))^{-1}\,{}^T v(n) \end{bmatrix} \tag{23}$$

Equation (23) can be simplified to a matrix equation of the form ${}^S_C R \, A = B$, where $A$ and $B$ are both $3 \times n$ matrices. If $n \geqslant 3$, ${}^S_C R$ is solved by singular value decomposition (SVD):

$${}^S_C R = V U^T \tag{24}$$

where $V$ and $U$ are the right and left singular matrices of $A B^T$.
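The SVD step (24) is the classical orthogonal Procrustes solution; a sketch is below. The determinant guard against the reflection case is our added safeguard, not something the paper mentions:

```python
import numpy as np

def procrustes_rotation(A, B):
    """Solve R @ A ≈ B for a rotation R, as in equation (24): with
    A B^T = U S V^T, the closed-form solution is R = V U^T.  A and B
    are 3 x n matrices of corresponding vectors (n >= 3)."""
    U, _, Vt = np.linalg.svd(A @ B.T)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:        # reflection guard (added safeguard)
        Vt[-1, :] *= -1.0
        R = Vt.T @ U.T
    return R
```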



**Table 2. **Camera calibration results.

| Parameters | Ideal value | 1 | 2 | 3 | 4 | 5 | Mean | STD |
| --- | --- | --- | --- | --- | --- | --- | --- | --- |
| $a_x$ | 4716.98 | 4739.61 | 4739.34 | 4740.01 | 4738.73 | 4739.37 | 4739.41 | 0.4682 |
| $a_y$ | 4716.98 | 4739.52 | 4739.48 | 4740.52 | 4739.50 | 4739.85 | 4739.77 | 0.4434 |
| $u_0$ | 640 | 625.80 | 630.14 | 627.27 | 629.84 | 627.85 | 628.18 | 1.8174 |
| $v_0$ | 512 | 510.27 | 511.17 | 512.76 | 507.94 | 511.23 | 510.67 | 1.7704 |
| $k_1$ | 0 | 0.039 | 0.045 | 0.014 | 0.043 | 0.052 | 0.039 | 0.014 |
| $k_2$ | 0 | −0.928 | −1.213 | −0.445 | −1.077 | −1.867 | −1.106 | 0.515 |
| RMS | — | 0.24 | 0.25 | 0.25 | 0.24 | 0.22 | 0.24 | 0.014 |

**Figure 6. **Simulated uncertainty of yaw angle by the effect of inclinometer.

**Figure 7. **Simulated uncertainty of yaw angle by the effect of laser spot.

The above solution is obtained by linear methods step by step and is not fully optimal. We can refine it by minimizing the following function:

$$J({}^S_C R, {}^S p_r) = \sum_{i=1}^{n} \left\| \left({}^T_S R(i)\, {}^S p_r + {}^T t(i)\right) - {}^T p_r(i) \right\|^2 \tag{25}$$

where $({}^S_C R, {}^S p_r)$ are the unknown parameters, and ${}^S_C R$ can be parameterized by a vector of three Euler angles according to equation (5) for easy calculation. This nonlinear minimization problem is solved by the Levenberg–Marquardt algorithm, with the linear solution used as the initial value in the optimization procedure.

**5. Simulations and experiments**

The camera we use in PMTS has 1280 × 1024 pixels, each 5.3 µm × 5.3 µm in size, and the industrial lens has a 25 mm focal length, which together determine a field of view of about 15° × 12°. The inclinometer has a measurement range of ±15° in both axes, and its measurement uncertainty is 0.005°. After the camera and the inclinometer are installed in PMTS, the $X_S$-axis is nearly parallel to the $Z_C$-axis, and the $Y_S$-axis is nearly parallel to the $X_C$-axis. A Leica TS15 total station with an angle measurement uncertainty of 1″, a distance measurement uncertainty of 1 mm + 1.5 ppm and a 3D coordinate resolution of 0.1 mm is used with PMTS for these experiments.

**Figure 8. **Calibration and verification using a reference target and a rigid body.

*5.1. Measurement error analysis*

According to the measurement model, the 3D translation accuracy of PMTS depends entirely on the measurement accuracy of the cooperating total station, and the accuracy of the pitch and roll angles depends entirely on the measurement accuracy of the biaxial inclinometer in PMTS, whereas the yaw angle error is due to the measurement errors of both the inclinometer and the imaging system. Therefore, we mainly analyze the effect of these factors on the yaw angle accuracy. Since the calculation model of the yaw angle is complicated, we have conducted Monte Carlo simulations to statistically evaluate its accuracy.



**Table 3. **Calibration results of the rotation between the camera and the inclinometer.

| Parameters | Ideal value | 1 | 2 | 3 | 4 | 5 | Mean | STD |
| --- | --- | --- | --- | --- | --- | --- | --- | --- |
| $\theta_x$ | −90° | −89.7175° | −89.7104° | −89.7214° | −89.7079° | −89.7042° | −89.7123° | 0.0070° |
| $\theta_y$ | 0° | 0.1046° | 0.1005° | 0.1063° | 0.0928° | 0.0924° | 0.0993° | 0.0065° |
| $\theta_z$ | −90° | −90.3952° | −90.3911° | −90.3869° | −90.3926° | −90.3925° | −90.3917° | 0.0030° |

Throughout the simulations, the translation is set to ${}^T t = [15\ \mathrm{m}, 0\ \mathrm{m}, 0\ \mathrm{m}]^T$, the yaw angle is set to $\alpha = 0°$, the values of pitch and roll are set according to the demand of each simulation, and all the unknown parameters in the whole model that should be calibrated are set to ideal values. For the parameters of the imaging model, the scale factors are obtained by dividing the designed focal length by the pixel size, the principal point values are set to half of the pixel counts, and all distortion coefficients are set to zero; these values are listed in the second column of table 2. For the rotation ${}^S_C R$, for simplicity of expression we parameterize it by three Euler angles $(\theta_x, \theta_y, \theta_z)$ in the same order as equation (5) describes, with their ideal values listed in the second column of table 3 according to the installation relationship between the camera and the inclinometer.

(1) *Effects of Inclinometer Error: *In this simulation, the measurement uncertainty of the inclinometer is 0.005°. We change the pitch and roll angles both from −15° to 15° with a 1° step, and run 1000 simulations at each position. The uncertainty of the yaw angle is illustrated in figure 6. The result shows that, within the inclinometer's range, the change of roll angle has little effect on the yaw angle uncertainty, but increasing pitch angle enlarges it: the uncertainty of the yaw angle is nearly 0° when the pitch angle is 0°, and grows to 0.0014° when the pitch angle is 15°.

(2) *Effects of Imaging System Accuracy: *In this simulation, the pitch and roll angles are both set to 0°. We add zero-mean Gaussian noise to the image spot center, with the noise uncertainty varying from 0.1 to 1 pixel, and ran 1000 simulations at each noise level. It is observed that the uncertainty of the yaw angle is almost proportional to the positioning uncertainty of the laser spot; a spot uncertainty of 0.5 pixel corresponds to a yaw angle uncertainty of 0.006°, as seen in figure 7.
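The near-linear relation between spot noise and yaw uncertainty can be reproduced with a heavily simplified single-axis sketch; the focal length in pixels below is a placeholder, not one of the paper's actual imaging parameters:

```python
import numpy as np

def yaw_from_spot(x_px, focal_px):
    """Yaw implied by the horizontal spot offset under a simplified pinhole model."""
    return np.degrees(np.arctan(x_px / focal_px))

def monte_carlo_yaw_std(noise_px, focal_px=8000.0, n_trials=1000, seed=0):
    """Std of the recovered yaw when the spot centre is perturbed by Gaussian noise."""
    rng = np.random.default_rng(seed)
    x_true = 0.0                                   # spot on the principal point (yaw = 0)
    x_noisy = x_true + rng.normal(0.0, noise_px, n_trials)
    return yaw_from_spot(x_noisy, focal_px).std()

# Yaw uncertainty grows essentially linearly with the spot-centre noise.
stds = [monte_carlo_yaw_std(s) for s in (0.1, 0.5, 1.0)]
```

Because the spot offsets stay tiny relative to the focal length, the arctangent is nearly linear, which is why the simulated yaw uncertainty scales almost exactly with the pixel noise.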

*5.2. Camera calibration experiment*

The rotary table used for calibration is manufactured with a high position precision of 3 seconds of arc for all the frames. At the beginning of the calibration, the relative position of the total station and the rotary table should be adjusted so that the laser shoots roughly at the center of the camera imaging plane. Then, from this starting position, the horizontal axis is rotated from −7° to 7° and the vertical axis from −6° to 6°, both with a 1° step, ensuring that the laser spots visit almost the whole plane of the image sensor within the field of view and resulting in 15 × 13 = 195 groups of calibration data.

**Figure 9. **The 3D coordinate errors of the reference target in the verification experiment.

We conducted this calibration five times, during which the relative position between the total station and the rotary table was changed. The initial values of the parameters in the camera model are given by the ideal values listed in the second column of table 2, and those of the rotations $\mathbf{R}_{v0}$ and $\mathbf{R}$ are given according to the real mounting position. The resulting values after optimization are also listed in table 2, where the last two columns display the mean and standard deviation (STD) of the five sets of results, and the last row displays the root mean squared (RMS) distances, in pixels, between detected image points and projected ones.

The standard deviations for all parameters are quite small, implying that the proposed method is quite stable. The mean RMS residual distance error of the laser spot center is 0.24 pixel, which corresponds to an equivalent RMS angle error of 0.003° according to the simulation in figure 7. These results verify the feasibility and effectiveness of the calibration approach.
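The paper optimizes the full camera model (including distortion) with the Levenberg–Marquardt algorithm [22]. As a simplified illustration, under a distortion-free pinhole assumption with hypothetical intrinsics, the same rotary-table sweep makes the image coordinates linear in the intrinsics, so they can be recovered with a single linear least-squares solve:

```python
import numpy as np

# Synthetic rotary-table sweep: horizontal -7..7 deg, vertical -6..6 deg, 1 deg steps,
# mirroring the 15 x 13 = 195 calibration positions described in the text.
az = np.radians(np.arange(-7, 8))
el = np.radians(np.arange(-6, 7))
AZ, EL = np.meshgrid(az, el)

f_true, cx_true, cy_true = 8000.0, 640.0, 512.0   # hypothetical intrinsics (pixels)
x = f_true * np.tan(AZ).ravel() + cx_true         # distortion-free pinhole projection
y = f_true * np.tan(EL).ravel() + cy_true

# Both image coordinates are linear in (f, cx, cy): stack them into one system.
n = x.size
A = np.zeros((2 * n, 3))
A[:n, 0], A[:n, 1] = np.tan(AZ).ravel(), 1.0
A[n:, 0], A[n:, 2] = np.tan(EL).ravel(), 1.0
b = np.concatenate([x, y])
f_est, cx_est, cy_est = np.linalg.lstsq(A, b, rcond=None)[0]
```

Once distortion coefficients enter the model, the problem becomes nonlinear and an iterative optimizer such as Levenberg–Marquardt, as used in the paper, is required.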

*5.3. Calibration of the rotation between the camera and the inclinometer*

We used a square steel tube about 1.6 m long as the rigid body for calibration and verification, as shown in figure 8. A PMTS and a reference prism are fixed on the two ends of the steel tube respectively. A total of 125 different positions and orientations were conducted for calibration. We also performed five sets of this calibration experiment, and use the three Euler angles (θ*x*, θ*y*, θ*z*) described in section 5.1 to express the rotation. The resulting values are listed in table 3, where the last two columns display the mean and standard deviation of the five sets of results. The standard deviations for all parameters are quite small, which implies that the proposed algorithm is quite stable.

**Figure 10. **Evaluation experiment for measurement accuracy of yaw angle.

*5.4. Evaluation of the whole 6-DOF measurement accuracy and calibration performance*

The accuracy evaluation for 6-DOF measurement is a challenging problem, partly because the measuring unit (PMTS) and the measuring base (total station) are separated, and partly because the relationships among the six parameters are coupled. The method utilizing a reference target and a PMTS fixed on a rigid body, as described in section 4.2, is employed not only for calibration but also for evaluating the whole 6-DOF measurement accuracy of the system; the calibration is simply the inverse process of the evaluation. After calibration, as the coordinate value of the reference target in frame *S *has been obtained, its value in the total station coordinate system can be calculated from the 6-DOF measurement according to equation (15). Compared with the coordinate value directly observed by the total station, the deviation distances reflect the whole accuracy of the system. This is perhaps the most reliable method, first because the measuring base is the same total station, and second because during testing all six parameters are involved and need not be decomposed.
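The deviation-distance test can be sketched as follows, with a hypothetical pose (`R`, `t`) and reference-point coordinate `p_S`; the names are illustrative, not the paper's notation:

```python
import numpy as np

def predicted_reference_point(R, t, p_S):
    """Map the reference point from the sensor frame S into the total-station frame."""
    return R @ p_S + t

def deviation_distance(R, t, p_S, p_observed):
    """Euclidean deviation between the predicted and directly observed coordinates."""
    return np.linalg.norm(predicted_reference_point(R, t, p_S) - p_observed)

# With a consistent pose and observation the deviation is zero; measurement errors
# in any of the six parameters show up as a nonzero deviation distance.
R = np.eye(3)
t = np.array([15.0, 0.0, 0.0])
p_S = np.array([1.6, 0.0, 0.0])          # reference point ~1.6 m from the PMTS
d = deviation_distance(R, t, p_S, R @ p_S + t)
```

Because the prediction uses all six pose parameters at once, a single scalar deviation exercises the whole measurement chain, which is the point made in the text.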

A total of 10 different positions and orientations were conducted for evaluation. We use the measurement model with its parameters set to the mean values in tables 2 and 3 to process the experimental data. For comparison, the experimental data were also processed by the model using ideal values. We name the camera calibration Calibration I, and the calibration of the rotation ${}^{S}\mathbf{R}$ Calibration II. The deviations in the 10 positions and orientations of the verification experiment are plotted in figure 9, showing that the proposed method achieves excellent accuracy. Using the proposed method, the mean deviation of these 10 sets of results is 0.5 mm. As a comparison, the mean deviation processed by the model without Calibration II is 10.2 mm, and without Calibrations I and II it is 14.2 mm.

**Figure 11. **Yaw angle measurement errors of the experiment carried out at distances of 8 m, 51 m, and 108 m under different pitch angles.

This experiment, on the one hand, demonstrates the necessity and validity of the calibration approach for achieving excellent measurement accuracy; on the other hand, it evaluates the whole 6-DOF measurement accuracy: for a reference point on the rigid body 1.6 m apart from the target sensor PMTS, the mean 6-DOF measurement accuracy of the system reaches 0.5 mm.

*5.5. Relative yaw angle evaluation*

According to the measurement principle in section 3, a rotation of the PMTS within the horizontal plane does not affect the values of pitch and roll, so the yaw angle can be evaluated individually. We introduced a horizontal multi-tooth dividing table with an accuracy of better than 1 second of arc to evaluate the yaw angle accuracy. The PMTS, with a 6D cloud platform to change and adjust its posture, is fixed on the multi-tooth dividing table as illustrated in figure 10. The multi-tooth dividing table has been leveled, so no matter how the posture of the PMTS changes, the measured pitch and roll angles of the PMTS do not change while the table rotates, and its rotation angle serves as the evaluation criterion for the relative measured yaw angle of the PMTS by direct comparison.
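The comparison against the dividing table reduces to differencing increments, since the absolute yaw offset between the two angular scales cancels. A minimal sketch with hypothetical angle sequences:

```python
import numpy as np

def relative_yaw_rms(table_angles_deg, measured_yaw_deg):
    """RMS error of relative yaw: measured increments vs. dividing-table increments."""
    table_rel = np.asarray(table_angles_deg) - table_angles_deg[0]
    meas_rel = np.asarray(measured_yaw_deg) - measured_yaw_deg[0]
    err = meas_rel - table_rel
    return np.sqrt(np.mean(err ** 2))

# Hypothetical 14-position run: table steps of 2 degrees, error-free measurements.
table = 2.0 * np.arange(14)
measured = 100.0 + table        # the arbitrary absolute offset cancels in the comparison
rms = relative_yaw_rms(table, measured)
```

In a real run `measured` would carry the PMTS yaw readings, and `rms` would correspond to the per-configuration values summarized in figure 11.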

In the experiments, the distance between the PMTS and the total station is set to about 8 m, 51 m, and 108 m respectively, and the pitch angle of the PMTS is set to about 0°, 5° and 10° at each distance. At each pitch of each distance, 14 sets of comparison data are acquired. The RMS errors of the yaw angle are summarized in figure 11, showing that an increase of the pitch angle enlarges the yaw angle measurement errors, which verifies the simulation results; the standard deviation of all the results is at most 0.0045°. It is interesting to note that the yaw angle errors decrease slightly with increasing measurement distance. This is mainly because when the total station is near the PMTS, the light spot on the image plane is more irregular at the edge and its position locating accuracy decreases.


**6. Conclusions**

In this paper, a 6-DOF measurement system with a total station has been developed, taking advantage of the total station's long measurement range and relatively high accuracy. The system configuration of the 6-DOF target sensor has been described; it is mainly composed of a pinhole prism, an industrial lens, a camera and a biaxial inclinometer. The imaging property of the light path has been illustrated, and the mathematical model for solving the six degrees of freedom has been expounded in detail. In order to calibrate this model, this paper has also proposed approaches for camera calibration and for the calibration between the coordinate systems of the sensor and the camera. Repeated experiments at both levels have verified the feasibility and stability of these approaches, and an evaluation experiment with a reference target on a rigid body has demonstrated the necessity and validity of the calibration approach for ensuring excellent measurement accuracy. The evaluation experiment has also shown that the 6-DOF measurement accuracy of the whole system reaches 0.5 mm for a reference point 1.6 m apart from the target sensor. In addition, the accuracy of the yaw angle has been analyzed by Monte Carlo simulations and evaluated in field experiments at measurement distances of up to 100 m. The experimental results reveal that the RMS error of the relative yaw angle is at most 0.0045°, and both the simulations and the experiments have verified that increasing the pitch angle enlarges the measurement error of the yaw angle.

Although the system has achieved high accuracy in static conditions, its dynamic performance will be analyzed and validated in future work. In addition, the imaging system is able to measure two angles, but in this paper we have not studied the calculation of the pitch angle from the camera information. By combining the pitch estimates from both the camera and the inclinometer, we may increase the accuracy of the system.

**Acknowledgments**

This work was funded by the National Natural Science Foundation of China (Grant No. 51225505, 51305297, 51405338) and the Natural Science Foundation of Tianjin (Grant No. 15JCQNJC04600).


**Appendix A. Solution of roll angle**

**Figure A1. **Geometrical principle of roll angle calculation.

As shown in figure A1, the coordinate system *O*-*XYZ* is rotated from *O*-*X*′*Y*′*Z*′ by pitch angle β and roll angle γ, and *OX*′, *OY*′ are both on the horizontal plane, which we name plane *H*. According to the definition of roll angle in this paper, line *OY* is obtained from *OY*′ by rotating angle γ around the *OX* axis, so we have *OX*⊥*OY* and *OX*⊥*OY*′. Line *YC* is parallel to *OX*, and point *C* is its intersection with plane *H*, so *YC* is perpendicular to plane *OYB* and therefore *YC*⊥*YB*. Line *YD* is perpendicular to plane *H*, and line *CD* and *OY*′ intersect at *B*, so we have *OB*⊥*YD*. Hence, along with *OB*⊥*YC*, line *OB* is perpendicular to plane *YBC*, and we have *OB*⊥*YB* and *OB*⊥*BC*. Since △*OXA* ≅ △*CYD*, we have ∠*YCB* = η, and for the right triangles △*YBC* and △*YBD* we have ∠*BYD* = η, so cos η = *YD*/*YB*. For △*OYB* and △*OYD*, we have sin γ = *YB*/*OY* and sin θ = *YD*/*OY*. Therefore sin γ · cos η = sin θ, and finally:

$$\gamma = \arcsin\frac{\sin\theta}{\cos\eta} \tag{A.1}$$

**Appendix B. Solution of yaw angle**

In this paper, all the yaw angle calculations take the same form:

$$\begin{bmatrix} q_x \\ q_y \end{bmatrix} = \begin{bmatrix} \cos\alpha & -\sin\alpha \\ \sin\alpha & \cos\alpha \end{bmatrix} \begin{bmatrix} p_x \\ p_y \end{bmatrix} \tag{B.1}$$

The geometric meaning of equation (B.1) is that the 2D vector $[q_x\ \ q_y]^T$ is obtained by rotating $[p_x\ \ p_y]^T$ through angle $\alpha$, and we have:

$$\begin{cases} q_x = p_x\cos\alpha - p_y\sin\alpha \\ q_y = p_x\sin\alpha + p_y\cos\alpha \end{cases} \tag{B.2}$$

Denote $N = \left\|[p_x\ \ p_y]^T\right\| = \left\|[q_x\ \ q_y]^T\right\|$ (a rotation preserves the norm); equation (B.1) is then transformed into:

$$\frac{1}{N}\begin{bmatrix} p_x & -p_y \\ p_y & p_x \end{bmatrix}\begin{bmatrix} \cos\alpha \\ \sin\alpha \end{bmatrix} = \frac{1}{N}\begin{bmatrix} q_x \\ q_y \end{bmatrix} \tag{B.3}$$

The 2 × 2 matrix $R_p = \frac{1}{N}\begin{bmatrix} p_x & -p_y \\ p_y & p_x \end{bmatrix}$ is a unit orthogonal matrix, so $[\cos\alpha\ \ \sin\alpha]^T$ follows as:

$$\begin{bmatrix} \cos\alpha \\ \sin\alpha \end{bmatrix} = R_p^T\,\frac{1}{N}\begin{bmatrix} q_x \\ q_y \end{bmatrix} = \frac{1}{N^2}\begin{bmatrix} p_x q_x + p_y q_y \\ p_x q_y - p_y q_x \end{bmatrix} \tag{B.4}$$

Therefore, we have

$$\frac{\sin\alpha}{\cos\alpha} = \frac{p_x q_y - p_y q_x}{p_x q_x + p_y q_y} \tag{B.5}$$

Finally, we conclude that

$$\alpha = \arctan\frac{p_x q_y - p_y q_x}{p_x q_x + p_y q_y} \tag{B.6}$$
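Equation (B.6) can be checked numerically. The sketch below uses NumPy's quadrant-aware `arctan2` in place of a bare arctangent (a small implementation choice, not taken from the paper) and recovers a known rotation angle from a vector pair:

```python
import numpy as np

def yaw_from_vectors(p, q):
    """Recover the angle rotating p onto q in the plane, equation (B.6) as atan2."""
    return np.arctan2(p[0] * q[1] - p[1] * q[0], p[0] * q[0] + p[1] * q[1])

alpha = np.radians(37.0)                         # arbitrary test angle
R = np.array([[np.cos(alpha), -np.sin(alpha)],
              [np.sin(alpha),  np.cos(alpha)]])  # rotation of equation (B.1)
p = np.array([0.8, -0.3])
q = R @ p                                        # rotate p by alpha
recovered = np.degrees(yaw_from_vectors(p, q))
```

The numerator and denominator are the 2D cross and dot products of p and q, so `arctan2` also resolves angles outside (−90°, 90°), which a plain arctangent cannot.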


**R****eferences**

[1] Muelaner J E, Cai B and Maropoulos P G 2010 Large-volume metrology instrument selection and measurability analysis *Proc. Inst. Mech. Eng. *B **224 **853–68

[2] Franceschini F, Galetto M, Maisano D and Mastrogiacomo L 2014 Large-scale dimensional metrology (LSDM): from tapes and theodolites to multi-sensor systems *Int. J. Prec. Eng. Manuf. ***15 **1739–58

[3] Rao C K, Mathur P, Pathak S, Sundaram S, Badagandi R R and Govinda K V 2013 A novel approach of correlating optical axes of spacecraft to the RF axis of test facility using close range photogrammetry *J. Opt. ***42 **51–63

[4] Shen X S, Lu M and Chen W 2011 Tunnel-boring machine positioning during microtunneling operations through integrating automated data collection with real-time computing *J. Constr. Eng. Manage. ASCE ***137 **72–85

[5] Chen S Y, Zhang J W, Zhang H X, Kwok N M and Li Y F 2012 Intelligent lighting control for vision-based robotic manipulation *IEEE Trans. Ind. Electron. ***59 **3254–63

[6] Xie W-F, Li Z, Tu X-W and Perron C 2009 Switching control of image-based visual servoing with laser pointer in robotic manufacturing systems *IEEE Trans. Ind. Electron. ***56 **520–9

[7] Zheng Y, Kuang Y, Sugimoto S, Astrom K and Okutomi M 2013 Revisiting the PnP problem: a fast, general and optimal solution *IEEE Int. Conf. on Computer Vision (ICCV) *pp 2344–51

[8] Luhmann T 2009 Precision potential of photogrammetric 6DOF pose estimation with a single camera *ISPRS J. Photogramm. Remote Sens. ***64 **275–84

[9] Xu D, Han L, Tan M and Li Y F 2009 Ceiling-based visual positioning for an indoor mobile robot with monocular vision *IEEE Trans. Ind. Electron. ***56 **1617–28

[10] Liu Z, Zhu J, Yang L, Liu H, Wu J and Xue B 2013 A single-station multi-tasking 3D coordinate measurement method for large-scale metrology based on rotary-laser scanning *Meas. Sci. Technol. ***24 **105004

[11] Muralikrishnan B, Phillips S and Sawyer D 2013 Laser trackers for large-scale dimensional metrology: a review *Precis. Eng. ***44 **13–28

[12] Kyle S 2005 Alternatives in 6DOF probing–more flexibility, lower cost, universal *Coordinate Measurement Systems Conf. (Austin, TX)*

[13] Kim Y K *et al *2013 Developing accurate long-distance 6-DOF motion detection with one-dimensional laser sensors: three-beam detection system *IEEE Trans. Ind. Electron. ***60 **3386–95

[14] Gugg C, O’Leary P and Harker M 2013 Large scale optical position sensitive detector *2013 IEEE Int. Instrumentation and Measurement Technology Conf. (I2MTC) *pp 1775–80

[15] Li Y H, Qiu Y R, Chen Y X and Guan K S 2014 A novel orientation and position measuring system for large & medium scale precision assembly *Opt. Lasers Eng. ***62 **31–7

[16] Jeon H, Bang Y and Myung H 2011 A paired visual servoing system for 6-DOF displacement measurement of structures *Smart Mater. Struct. ***20 **16

[17] Myung H, Lee S and Lee B 2011 Paired structured light for structural health monitoring robot system *Struct. Health Monit. Int. J. ***10 **49–64

[18] Rufino G and Accardo D 2003 Enhancement of the centroiding algorithm for star tracker measure refinement *Acta Astronaut. ***53 **135–47

[19] Zhang Z 2000 A flexible new technique for camera calibration *IEEE Trans. Pattern Anal. Mach. Intel. ***22 **1330–4

[20] Remondino F and El-Hakim S 2006 Image-based 3D modelling: a review *Photogramm. Rec. ***21 **269–91

[21] Salvi J, Armangue X and Batlle J 2002 A comparative review of camera calibrating methods with accuracy evaluation *Pattern Recognit. ***35 **1617–35

[22] Moré J J 1978 The Levenberg–Marquardt algorithm: implementation and theory *Numerical Analysis *(Berlin: Springer) pp 105–16
