Biopsy Aid
Abstract: Inertial measurement units (IMUs) are widely used for sensing orientation in many fields, from medical applications and vehicle control to motion capture and virtual reality. Application practice is well elaborated for many use cases, and the characteristics of IMU-based measurements have been investigated in depth, especially since the advent of MEMS-based IMU devices. However, the theoretical basis of applications in which the precise direction or complete orientation of a tool must be tracked with respect to an exact reference frame (e.g., a medical imaging device, an anatomical reference, or a particular fixture) has not been covered in the literature, and commercial implementations are also limited to trivial settings. In this paper, calibration and error display methods are derived, along with error distribution analysis and experimental validation, to give a generic yet focused discussion. The discussed approach presents a general framework for IMU-based orientation tracking.
The heading directions in the reference frame can be computed as

i_A^(ref) = q^(ref,sensor)(t_A) · n_heading^(sensor),    (9)
i_B^(ref) = q^(ref,sensor)(t_B) · n_heading^(sensor).    (10)

This way, the i_A and i_B directions are known in frames world and ref, so q^(world,ref) can be computed based on Subsection 3.1.

From the computed quantities and the current measurement q^(ref,sensor)(t), the current heading direction in frame world can be computed as

n_heading^(world)(t) = q^(world,ref) · q^(ref,sensor)(t) · n_heading^(sensor).    (11)

Then the angle of the current heading can be computed as

φ_computed(t) = arccos(n_heading(t)^T · n_target).    (12)

(This is the computed heading angle, which can be measured and computed assuming accurate measurements and calibrations.)

For nonsymmetric tools, the orientation of the tool in the world frame can be computed as

q^(world,tool)(t) = q^(world,ref) · q^(ref,sensor)(t) · q^(sensor,tool).    (20)

Then the angle of the tool can be computed as

φ_computed(t) = angle((q^(world,target))^(-1) · q^(world,tool)(t)),    (21)

assuming ideal calibration and orientation sensor.

4. METHODS TO DISPLAY ERRORS

4.1 Directional tools

For displaying the current error (magnitude and direction as well), define an x-y plane perpendicular to n_target, then project n_heading onto this plane and display its coordinates; see Fig. 6.

For the sake of simplicity, assume that the preferred y direction is the vertical (denoted as y_proto) and the preferred x direction is a horizontal direction (denoted as x_proto).
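The computations in eqs. (9)-(12) are plain quaternion algebra. A minimal sketch of the directional chain follows; all numeric values (the identity calibration result, the 0.2 rad test rotation about y) are made-up placeholders, not values from the paper:

```python
import math

def q_mul(a, b):
    # Hamilton product of quaternions stored as (w, x, y, z)
    aw, ax, ay, az = a
    bw, bx, by, bz = b
    return (aw*bw - ax*bx - ay*by - az*bz,
            aw*bx + ax*bw + ay*bz - az*by,
            aw*by - ax*bz + ay*bw + az*bx,
            aw*bz + ax*by - ay*bx + az*bw)

def q_conj(q):
    w, x, y, z = q
    return (w, -x, -y, -z)

def rotate(q, v):
    # q * v * q^-1 for a unit quaternion q
    w, x, y, z = q_mul(q_mul(q, (0.0,) + tuple(v)), q_conj(q))
    return (x, y, z)

# Placeholder orientations: identity calibration, 0.2 rad rotation about y
q_world_ref = (1.0, 0.0, 0.0, 0.0)
q_ref_sensor_t = (math.cos(0.1), 0.0, math.sin(0.1), 0.0)
n_heading_sensor = (0.0, 0.0, 1.0)
n_target = (0.0, 0.0, 1.0)

# Eq. (11): current heading direction in frame world
q_world_sensor = q_mul(q_world_ref, q_ref_sensor_t)
n_heading_world = rotate(q_world_sensor, n_heading_sensor)

# Eq. (12): angle between the current heading and the target direction
dot = sum(a*b for a, b in zip(n_heading_world, n_target))
phi = math.acos(max(-1.0, min(1.0, dot)))  # clamp against rounding
```

With these placeholders, `phi` recovers the applied 0.2 rad rotation; eqs. (9)-(10) and (20) follow the same pattern with different quaternions in the chain.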
Fig. 6. Display of current heading direction (n_heading)

If the target direction is far from vertical, directions x and y can be computed as

x = (y_proto × n_target)_norm,    y = n_target × x,    (22)

otherwise as

y = (n_target × x_proto)_norm,    x = y × n_target.    (23)

Then the coordinates on the plane are

c(t) = [x^T; y^T] · n_heading(t).    (24)

4.2 Nonsymmetric tools

For nonsymmetric tools, there can also be defined a heading direction n_heading^(tool) in the tool frame, around which its rotation is the most simple. Furthermore, define two directions i and j perpendicular to it and to each other, also given in frame tool.

For the currently targeted q^(world,tool), compute

n_heading,target^(world) = q_target^(world,tool) · n_heading^(tool),    (25)

and directions x and y as in the previous subsection. Then the angle error of this heading direction can be shown as in (24).

In order to visualize the error of rotation around n_heading, display the targeted i and j vectors and their current directions at the projected heading vector, as seen in Fig. 7.

Fig. 7. Display of current and target orientation via n_heading and the i, j vectors

5. NUMERICAL ANALYSIS

5.1 Numerical simulations

This section demonstrates the achievable precision using the proposed calibration method, considering the inaccuracy of the applied sensor and the uncertainties of the calibration process as well.

In practical scenarios, the inaccuracies may come from

• error of the orientation measurement, denote its angle δ_meas,
• inaccurate calibrator device placement, denote its error δ_calib,
• inaccurate device placement into the sockets (including the error of the socket geometries), denote its error δ_socket.

According to the nonlinear nature of the computations, the error of the overall system is investigated via Monte Carlo simulations. The following angle error ranges are considered, assuming uniform distribution:

δ_meas ≤ 1.5 [deg],    (26)
δ_calib ≤ 0.5 [deg],    (27)
δ_socket ≤ 0.1 [deg].    (28)

Directional tool. This section investigates how the computed heading direction can differ from the real direction caused by inaccurate measurements; this difference is referred to as the computational angle error ε.

A series of random calibrations (N = 10^6) was performed with the previously described error characteristics. The distributions of the angle errors of n_heading^(sensor) and q^(world,ref) are represented in Fig. 8. The mean angle error of the n_heading^(sensor) direction is 0.62 [deg], and the mean angle error of the computed q^(world,ref) orientation is 0.94 [deg].

Fig. 8. Error distribution of calibration for directional sensors (assuming errors (26))

For each calibration, eight measurements were simulated in different device poses. The ε angle differences of the computed and real n_heading^(world) are represented in Figure 9. Even with such an inaccurate device, the error is below 6.28 [deg], and its expected value is around 1.09 [deg].

Nonsymmetric tool. In this case, the measure of difference is the angle between the computed frame and the real frame. Similarly, N = 10^6 random calibrations were performed, with eight simulated measurements in each of them. The errors of the calibrated quantities show the distributions plotted in Figure 10. The mean angle error of q^(sensor,tool) is 0.73 [deg] and the mean error of q^(world,ref) is 0.78 [deg].
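The Monte Carlo scheme above can be sketched as follows. Here each error source is modeled as a rotation about a uniformly random axis with an angle drawn uniformly within the bounds (26)-(28), and the sample count is reduced from the paper's 10^6; both are illustrative assumptions, not the paper's exact simulation code:

```python
import math
import random

def q_mul(a, b):
    # Hamilton product of quaternions stored as (w, x, y, z)
    aw, ax, ay, az = a
    bw, bx, by, bz = b
    return (aw*bw - ax*bx - ay*by - az*bz,
            aw*bx + ax*bw + ay*bz - az*by,
            aw*by - ax*bz + ay*bw + az*bx,
            aw*bz + ax*by - ay*bx + az*bw)

def q_angle_deg(q):
    # rotation angle of a unit quaternion, in degrees
    w, x, y, z = q
    return math.degrees(2.0 * math.atan2(math.sqrt(x*x + y*y + z*z), abs(w)))

def random_error(rng, max_deg):
    # rotation about a random axis by an angle uniform in [0, max_deg]
    axis = [rng.gauss(0.0, 1.0) for _ in range(3)]
    norm = math.sqrt(sum(c*c for c in axis))
    half = math.radians(rng.uniform(0.0, max_deg)) / 2.0
    s = math.sin(half) / norm
    return (math.cos(half), s*axis[0], s*axis[1], s*axis[2])

rng = random.Random(0)
eps = []
for _ in range(20_000):  # the paper uses N = 10^6
    # compose the three independent error sources, bounds from (26)-(28)
    q_err = q_mul(random_error(rng, 1.5),
                  q_mul(random_error(rng, 0.5), random_error(rng, 0.1)))
    eps.append(q_angle_deg(q_err))

mean_eps = sum(eps) / len(eps)
```

Since the rotation-angle metric satisfies the triangle inequality, the composed error can never exceed the 2.1 deg sum of the bounds; `mean_eps` then gives a rough feel for how the uniform bounds combine.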
Fig. 10. Error distribution of calibration for nonsymmetrical sensors (assuming errors (26))

The angle error of the computed q^(world,tool) based on the measured orientation is distributed as shown in Figure 11, with mean value 1.29 [deg].

… turned off because of the magnetic field of the manipulator. During the experiment preparation, we observed that the manipulator's micro-vibrations could significantly decrease the sensor's performance in terms of noise and drift in the resulting signal. For this reason, it was fixed to the manipulator in silicon bedding instead of rigid mounting.

q1^(world,tool) = 0.4997 + 0.5003i − 0.5011j + 0.4989k,    (29)
q2^(world,tool) = 0.0006 − 0.0005i − 0.7081j + 0.7061k,    (30)
q3^(world,tool) = 0.0008 + 0.0000i − 0.0014j + 1.0000k.    (31)

First, it was rotated around z_base by π/2, then around x_base by π/2. The performed path is plotted in Figure 12. After the initial segment, five complete loops were performed. The figure shows the orientation in each measurement point, with red line segments in the x direction and blue in the z direction.
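As a quick sanity check, the rotation angles of the target orientations can be extracted directly from the quaternion components printed in (29)-(31):

```python
import math

def q_angle_deg(q):
    # rotation angle of a (near-)unit quaternion (w, x, y, z), in degrees
    w, x, y, z = q
    return math.degrees(2.0 * math.atan2(math.sqrt(x*x + y*y + z*z), abs(w)))

# quaternion values as printed in eqs. (29)-(31)
q1 = (0.4997, 0.5003, -0.5011, 0.4989)
q2 = (0.0006, -0.0005, -0.7081, 0.7061)
q3 = (0.0008, 0.0000, -0.0014, 1.0000)

angles = [q_angle_deg(q) for q in (q1, q2, q3)]
```

The check shows that q1 is close to a 120 deg rotation, while q2 and q3 are near 180 deg rotations.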
Fig. 13. The angle error of n_heading^(world) of the measurements after calibration. The angular velocity state is also shown via the change of orientation since the last measurement.

Fig. 14. The angle error of n_heading^(world) of the measurements after calibration, omitting the fast transient (dφ > 0.5 [deg]) phases. Angular velocity is shown via the change of orientation since the last measurement.

Fig. 15. Error distribution of n_heading^(world) omitting the fast (dφ > 0.5 [deg]) transient measurements. Its maximal value is 2.38 [deg] and its mean value is 0.54 [deg].

Fig. 16. The angle error of q^(world,tool) of the measurements after calibration. The angular velocity state is also shown via the change of orientation since the last measurement.

Fig. 17. The angle error of q^(world,tool) of the measurements after calibration, omitting the fast transient (dφ > 0.5 [deg]) measurements. Angular velocity is shown via the change of orientation since the last measurement.

Fig. 18. Error distribution of q^(world,tool) omitting the fast (dφ > 0.5 [deg]) transient measurements. The maximum error is 2.44 [deg] and its mean value is 0.98 [deg].
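The transient filter used in the figures above (omitting samples whose change of orientation since the previous measurement exceeds 0.5 deg) can be sketched as follows; the quaternion sequences and the identity reference are made-up placeholders:

```python
import math

def q_mul(a, b):
    # Hamilton product of quaternions stored as (w, x, y, z)
    aw, ax, ay, az = a
    bw, bx, by, bz = b
    return (aw*bw - ax*bx - ay*by - az*bz,
            aw*bx + ax*bw + ay*bz - az*by,
            aw*by - ax*bz + ay*bw + az*bx,
            aw*bz + ax*by - ay*bx + az*bw)

def q_conj(q):
    w, x, y, z = q
    return (w, -x, -y, -z)

def angle_between_deg(a, b):
    # angle of the relative rotation a^-1 * b, in degrees
    w, x, y, z = q_mul(q_conj(a), b)
    return math.degrees(2.0 * math.atan2(math.sqrt(x*x + y*y + z*z), abs(w)))

def quasi_static_errors(measured, reference, dphi_max=0.5):
    # keep the angle error only where the change of orientation since
    # the previous measurement is below dphi_max [deg]
    kept = []
    for k in range(1, len(measured)):
        if angle_between_deg(measured[k-1], measured[k]) < dphi_max:
            kept.append(angle_between_deg(reference[k], measured[k]))
    return kept

def qz(deg):
    # rotation about the z axis by the given angle [deg]
    half = math.radians(deg) / 2.0
    return (math.cos(half), 0.0, 0.0, math.sin(half))

# placeholder data: slow 0.1 deg/sample drift, then one fast 9.6 deg jump
measured = [qz(0.1 * k) for k in range(5)] + [qz(10.0)]
reference = [(1.0, 0.0, 0.0, 0.0)] * 6
errs = quasi_static_errors(measured, reference)
```

Only the four slow samples survive the filter; the 9.6 deg jump is discarded as a transient.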
Nonsymmetric tool validation. The sensor and the measurement can be considered a nonsymmetric setup. In this case, the calibration process described in Subsection 4.2 is used. After obtaining q^(world,ref) and q^(sensor,tool), the measurements can be evaluated via (20). The result is compared to the q^(world,tool) orientation provided by the robot. The angle between the two orientations is plotted in Figure 16. The transients show large errors in this case too; for this reason, only the quasi-static measurements are considered, where the change of orientation is less than 0.5 [deg] between two measurements. These results are plotted in Figure 17.

The errors are mostly below 2 [deg]; Figure 18 displays them in a histogram. In this case, the errors are more significant because the rotation around the heading direction was not considered in the former subsection; here, however, it is also relevant.

6. CONCLUSION

The paper proposed an established computation framework, including calibration and error display, that supports the usage of IMU sensors for orientation tracking with respect to an external reference. The method considers two practically important use cases: direction tracking (axisymmetric heading tracking) and nonsymmetric tracking (full orientation tracking). In both cases, the proposed calibration procedure acquires three orientations to enable the referencing to an external fixture and the calibration of the rotation between the tool and the ad hoc attached sensor. The propagating effect of measurement and calibration errors was investigated and illustrated via simulated error distributions and experimental measurements. The analysis showed the effectiveness of currently available MEMS-based IMU units with the proposed methods.
REFERENCES

Abyarjoo, F., Barreto, A., Cofino, J., and Ortega, F.R. (2015). Implementing a Sensor Fusion Alg. for 3D Orientation Detection with Inertial/Magnetic Sens. In T. Sobh and K. Elleithy (eds.), Innovations and Advances in Comp., Inf., Syst. Sci., Netw. and Engin., Lecture Notes in Electrical Engineering, 305–310. Springer International Publishing, Cham.
Beange, K.H.E., Chan, A.D.C., and Graham, R.B. (2018). Evaluation of wearable IMU performance for orientation estimation and motion tracking. In 2018 IEEE Int. Symp. on Medical Meas. and App. (MeMeA), 1–6.
Borges, M., Symington, A., Coltin, B., Smith, T., and Ventura, R. (2018). HTC Vive: Analysis and accuracy improvement. In 2018 IEEE/RSJ Int. Conf. on Int. Robots and Syst. (IROS), 2610–2615. IEEE.
Cavallo, A., Cirillo, A., Cirillo, P., De Maria, G., Falco, P., Natale, C., and Pirozzi, S. (2014). Experimental Comparison of Sensor Fusion Algorithms for Attitude Estimation. IFAC Proc. Vol., 47(3), 7585–7591.
Chen, Y., Fu, C., Leung, W.S.W., and Shi, L. (2020). Drift-Free and Self-Aligned IMU-Based Human Gait Tracking System With Augmented Precision and Robustness. IEEE Robotics and Automation Letters, 5(3).
Laidig, D., Caruso, M., Cereatti, A., and Seel, T. (2021). BROAD—A Benchmark for Robust Inertial Orientation Estimation. Data, 6(7).
Li, W., Zhang, L., Sun, F., Yang, L., Chen, M., and Li, Y. (2015). Alignment calibration of IMU and Doppler sensors for precision INS/DVL integrated navigation. Optik, 126(23), 3872–3876.
Mahdi, A.E., Azouz, A., Abdalla, A., and Abosekeen, A. (2022). IMU-Error Estimation and Cancellation Using ANFIS for Improved UAV Navigation. In 2022 13th Int. Conf. on Electr. Eng. (ICEENG), 120–124. IEEE.
Nazarahari, M. and Rouhani, H. (2021). Sensor fusion algorithms for orientation tracking via magnetic and inertial measurement units: An experimental comparison survey. Information Fusion, 76, 8–23.
Nebot, E. and Durrant-Whyte, H. (1997). Initial calibration and alignment of an inertial navigation. In Proc. Fourth Annual Conf. on Mech. and Mach. Vision in Prac., 175–180. IEEE.
Ricci, L., Taffoni, F., and Formica, D. (2016). On the Orientation Error of IMU: Investigating Static and Dynamic Accuracy Targeting Human Motion. PLOS ONE, 11(9).
Shepperd, S.W. (1978). Quaternion from rotation matrix. Journal of Guidance and Control, 1(3), 223–224.
Wang, X. (2009). Fast alignment and calibration algorithms for inertial navigation system. Aerospace Sci. and Tech., 13(4-5), 204–209.
Wen, Z., Yang, G., and Cai, Q. (2021). An Improved Calibration Method for the IMU Biases Utilizing KF-Based AdaGrad Algorithm. Sensors, 21(15).
Zhang, S., Yu, S., Liu, C., Yuan, X., and Liu, S. (2016). A dual-linear Kalman filter for real-time orientation determination system using low-cost MEMS sensors. Sensors, 16(2), 264.
Zihajehzadeh, S., Loh, D., Lee, M., Hoskinson, R., and Park, E. (2014). A cascaded two-step Kalman filter for estimation of human body segment orientation using MEMS-IMU. In 2014 36th Annual Int. Conf. of the IEEE Engin. in Medicine and Biology Soc., 6270–6273.
Zuñiga-Noël, D., Moreno, F.A., and Gonzalez-Jimenez, J. (2021). An analytical solution to the IMU initialization problem for visual-inertial systems. IEEE Rob. and Aut. Let., 6(3), 6116–6122.

APPENDIX

The orientations will be described via unit quaternions. The main operations for their usage are discussed here. Denote the unit quaternion as

q = w + xi + yj + zk,

where w^2 + x^2 + y^2 + z^2 = 1. The related rotation matrix can be written as

       [ 1 − 2(y^2 + z^2)   2xy − 2zw          2xz + 2yw        ]
R_q =  [ 2xy + 2zw          1 − 2(x^2 + z^2)   2yz − 2xw        ]
       [ 2xz − 2yw          2yz + 2xw          1 − 2(x^2 + y^2) ]

The quaternion from a rotation matrix can be computed via Shepperd's method, see Shepperd (1978). The quaternion of a rotation around axis t by angle φ is

q = cos(φ/2) + sin(φ/2)(t_x i + t_y j + t_z k),

and its opposite −q represents the same rotation. The angle of a quaternion can be computed as

φ = 2 · atan2(√(x^2 + y^2 + z^2), w),

and the axis from the direction of the complex part, taking into account the sign of the real part.

The rotation of a vector v by a quaternion q (denoted q · v) can be computed as the matrix product R_q · v. If a vector v represents a direction described by frame A, it is written as v^(A). Furthermore, the indexing of rotation matrices and quaternions must be understood as

v^(B) = q^(B,A) · v^(A),  and  v^(B) = R^(B,A) · v^(A).

The quaternion that represents the inverse rotation can be computed as

q^(−1) = −w + xi + yj + zk.

Denote another unit quaternion as

q2 = w2 + x2 i + y2 j + z2 k.

The product of the rotations q and q2, for computations like q · (q2 · v), can be computed as

q · q2 = w w2 − x x2 − y y2 − z z2 + i(w x2 + x w2 + y z2 − z y2) + j(w y2 − x z2 + y w2 + z x2) + k(w z2 + x y2 − y x2 + z w2).

The angle between orientations A and B can be computed as the angle of the quaternion q^(A,B). The normalization of a vector will be denoted as

(v)_norm = v/||v||.

The angle of two vectors v, v2 is computed as

φ = acos(v^T v2 / (||v|| · ||v2||)).
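The appendix operations can be collected into a small sketch (Hamilton convention, (w, x, y, z) component order assumed). It cross-checks that rotating a vector by the matrix R_q and by the quaternion product agree; the inverse is implemented as the conjugate, which represents the same rotation as the −w + xi + yj + zk form above:

```python
import math

def q_mul(a, b):
    # product of the rotations q and q2 (Hamilton convention)
    aw, ax, ay, az = a
    bw, bx, by, bz = b
    return (aw*bw - ax*bx - ay*by - az*bz,
            aw*bx + ax*bw + ay*bz - az*by,
            aw*by - ax*bz + ay*bw + az*bx,
            aw*bz + ax*by - ay*bx + az*bw)

def q_inv(q):
    # conjugate; for unit quaternions this is the inverse rotation
    w, x, y, z = q
    return (w, -x, -y, -z)

def rot_matrix(q):
    # the rotation matrix R_q from the appendix
    w, x, y, z = q
    return ((1 - 2*(y*y + z*z), 2*(x*y - z*w),     2*(x*z + y*w)),
            (2*(x*y + z*w),     1 - 2*(x*x + z*z), 2*(y*z - x*w)),
            (2*(x*z - y*w),     2*(y*z + x*w),     1 - 2*(x*x + y*y)))

def mat_vec(R, v):
    return tuple(sum(R[i][j] * v[j] for j in range(3)) for i in range(3))

def q_rotate(q, v):
    # q * v * q^-1
    w, x, y, z = q_mul(q_mul(q, (0.0,) + tuple(v)), q_inv(q))
    return (x, y, z)

def q_axis_angle(t, phi):
    # q = cos(phi/2) + sin(phi/2)(tx i + ty j + tz k), t assumed unit
    s = math.sin(phi / 2.0)
    return (math.cos(phi / 2.0), s*t[0], s*t[1], s*t[2])

def q_angle(q):
    # angle of a quaternion: 2 * atan2(|vector part|, w)
    w, x, y, z = q
    return 2.0 * math.atan2(math.sqrt(x*x + y*y + z*z), w)

# cross-check on a 60 deg rotation about z applied to the x axis
q = q_axis_angle((0.0, 0.0, 1.0), math.pi / 3)
v = (1.0, 0.0, 0.0)
assert all(abs(a - b) < 1e-12
           for a, b in zip(mat_vec(rot_matrix(q), v), q_rotate(q, v)))
```

The same helpers cover every operation the main text needs: frame chaining via `q_mul`, frame inversion via `q_inv`, and the angle metric via `q_angle`.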