Method for Direction and Orientation Tracking Using IMU Sensor ⋆

József Kuti ∗ Tamás Piricz ∗ Péter Galambos ∗

∗ The authors are with the Antal Bejczy Centre for Intelligent Robotics, Óbuda University, Budapest, Hungary.

Abstract: Inertial measurement units are widely used for sensing orientation in different fields, from medical applications and vehicle control to motion capture and virtual reality. Application practice is well elaborated for many use cases, and the characteristics of IMU-based measurements have been deeply investigated, especially since the advent of MEMS-based IMU devices. However, the theoretical basis of applications in which the precise direction or complete orientation of a tool must be tracked with respect to an exact reference frame (e.g., a medical imaging device, anatomical reference, or a particular fixture) has not been covered in the literature, and commercial implementations are likewise limited to trivial settings. In this paper, calibration and error display methods are derived along with error distribution analysis and experimental validation to give a generic yet focused discussion. The discussed approach presents a general framework for IMU-based orientation tracking.

Keywords: Inertial measurement unit, IMU, orientation estimation

1. INTRODUCTION

Precise and high-frequency sensing of spatial orientation is crucial in many applications, e.g., in certain medical interventions (biopsy, ablation, nephrostomy, etc.), human motion tracking (character animation, rehabilitation, training), virtual and augmented reality (head tracking), and vehicle tracking (drones, UAVs, self-driving). Since low-cost, integrated, miniature MEMS-type IMU sensor packages are widely available, several studies have been published introducing algorithms related to orientation measurement. Some of these studies aim to improve the accuracy in terms of the bias and drift characteristics of the sensors (Chen et al., 2020; Mahdi et al., 2022; Wen et al., 2021), while other papers deal with sensor fusion methods utilizing the accelerometer, gyroscope, and magnetometer signals together to predict the orientation (Cavallo et al., 2014; Abyarjoo et al., 2015; Nazarahari and Rouhani, 2021). Among these approaches, variants of Kalman filtering (Zhang et al., 2016; Zihajehzadeh et al., 2014), fuzzy logic, and various machine learning methods are studied, to mention a few directions. Another group of papers contributes to a better understanding of the accuracy of MEMS-based IMU sensors (Ricci et al., 2016; Beange et al., 2018; Laidig et al., 2021).

Most use cases are based purely on continuous tracking of orientation with respect to an initial state (Borges et al., 2018; Li et al., 2015), set randomly upon powering the sensor, or with respect to a single calibrated orientation as a reference (Zuñiga-Noël et al., 2021). The reference is usually registered via a simple procedure that associates the sensor output with a concrete physical alignment of the tracking device (Nebot and Durrant-Whyte, 1997; Wang, 2009). These approaches are viable only when certain geometric assumptions hold and the required accuracy is not critical. Applicable geometric assumptions can be, for example, the horizontal alignment of the device w.r.t. the gravitational field or a known geometric relation of the IMU package to the tracked device (e.g., in a VR headset). In practice, these assumptions hold only to a certain (relatively low) accuracy, significantly below the achievable precision of currently available IMU sensors.

The authors could not find published methods that allow accurate alignment of the sensor reference frame and the tool direction without the aforementioned assumptions. This paper studies the case when a device's orientation, or a specific functional direction of the device, is tracked with respect to a physical fixture as a reference, assuming that the geometric relationship between the device and the sensor package is unknown before calibration. This situation is practically relevant in any high-precision application where the sensor package is attached to the tracked device in an ad-hoc manner, i.e., just before use. It is quite usual, for example, in medical applications where the sensor unit is attached to a medical device by the operator during the preparation of an intervention.

The following section describes the problem and the measurement situation. Section 3 details the proposed calibration methods. Then, Section 4 drafts methods for displaying the current difference from the target orientation to the operator. Section 5 shows numerical simulations and physical measurements to reveal how the inaccuracies of the sensor and the calibration influence the error distribution. Finally, Section 6 concludes the results.

⋆ This work was supported by the ÚNKP-22-5 New National Excellence Program of the Ministry for Innovation and Technology from the source of the National Research, Development and Innovation Fund, and by project no. 2019-1.3.1-KK-2019-00007 with support provided from the National Research, Development and Innovation Fund of Hungary, financed under the 2019-1.3.1-KK funding scheme. Péter Galambos is a Bolyai Fellow of the Hungarian Academy of Sciences.
2. BASIC CONCEPTS

2.1 Problem description

Device with one functional direction In this use case, there is a direction denoted as n_target given in the world frame. In order to perform the specific operation, the n_heading direction of the device must be oriented toward this direction. For example, if the device is a biopsy needle, the heading direction is the direction of the needle. The angle difference of the two directions (the direction error) is denoted by φ; see Figure 1.

Fig. 1. Problem illustration in the case of needle biopsy (world frame; needle at puncture point P with heading direction n_heading, target direction n_target, and direction error φ).

In the needle biopsy use case, if the biopsy aims to reach a spherical region of radius R at a distance l from the puncture point, the direction error must be smaller than the limit value

ε_max = arctan(R/l). (1)

(For example, a target region of radius R = 5 mm at a depth of l = 100 mm gives ε_max ≈ 2.9 deg.) We also refer to this use case as the axisymmetric case.

Device with multiple functional directions This use case is considered when a frame defined with respect to the device must be aligned precisely to a target orientation; see Figure 2. Such a device can be, for example, an ultrasonic transducer or a femoral stem. We also refer to this use case as the nonsymmetric case.

Fig. 2. Problem illustration in the case of a nonsymmetric tool (world frame; tool frame at point P and target orientation).

In this case, it can also be assumed that a heading direction is given in the tool frame. The error can be characterized by the angle between the current and the target tool frame.

2.2 Sensor arrangement

An orientation sensor (e.g., a MEMS IMU unit) is supposed to be rigidly mounted on the device. It measures the orientation with respect to a reference frame set up during initialization, denoted as R^(ref,sensor)(t) (or, in quaternion form, q^(ref,sensor)(t)). The relation of this reference frame to the world frame is not known.

In the axisymmetric case, the heading direction in the sensor frame is another quantity to be calibrated; see Fig. 3.

Fig. 3. The sensor frame and its reference in the axisymmetric case (reference and sensor frames related by R^(ref,sen)(t); heading direction n_heading).

In the nonsymmetric case, the sensor-to-tool rotation must be determined in addition to the reference-to-world rotation; see Fig. 4.

Fig. 4. The sensor frame, its reference, and the tool frame in the nonsymmetric case (reference and sensor frames related by R^(ref,sen)(t)).

3. THE PROPOSED CALIBRATION METHODS

First, a simplified method is presented, which determines the relative orientation of two frames by acquiring two known directions described with respect to both frames. Based on that, a calibration method is defined for the axisymmetric use case. Following that, a method for the nonsymmetric case is introduced.

3.1 Computation of frame orientations based on known directions

Assume that there are two directions (t_1 and t_2) that are known in two frames; denote the frames by A and B. The following orthogonal directions can be defined based on them:

i_1 = (t_1 + t_2)_norm, (2)
i_2 = (t_1 × t_2)_norm,
i_3 = i_1 × i_2,

which construct a frame (see Fig. 5); denote it by i for the sake of simplicity.

Fig. 5. Construction of frame i based on directions t_1 and t_2.

By determining these base vectors in frame A, the following rotation matrix describes the link between frames A and i:

R^(A,i) = [i_1 i_2 i_3]. (3)

It can also be computed for frame B, resulting in R^(B,i). The rotation matrix R^(A,B) can then be computed as

R^(A,B) = R^(A,i) · (R^(B,i))^T, (4)

from which the quaternion q^(A,B) can also be obtained.
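To make the construction concrete, the following minimal NumPy sketch implements (2)-(4); the function names are illustrative, not from the paper. With noise-free directions it reproduces R^(A,B) exactly.

```python
import numpy as np

def normalize(v):
    return v / np.linalg.norm(v)

def frame_from_directions(t1, t2):
    """Base vectors i1, i2, i3 of Eq. (2), stacked as the rotation matrix of Eq. (3)."""
    i1 = normalize(t1 + t2)
    i2 = normalize(np.cross(t1, t2))
    i3 = np.cross(i1, i2)
    return np.column_stack([i1, i2, i3])      # R^(frame,i)

def relative_rotation(t1_A, t2_A, t1_B, t2_B):
    """R^(A,B) of Eq. (4) from the two directions expressed in frames A and B."""
    R_Ai = frame_from_directions(t1_A, t2_A)
    R_Bi = frame_from_directions(t1_B, t2_B)
    return R_Ai @ R_Bi.T
```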
3.2 Calibration of axisymmetric tools

Denote two different (far from parallel) directions by i_A and i_B, with known coordinates in the world frame. Perform the following three measurements:

• "A": heading to direction i_A,
• "Arot": heading to direction i_A, but rotated around it by φ ≈ π/2,
• "B": heading to direction i_B.

Denote the measured q^(ref,sensor) orientation at the time moments of the three measurements t_A, t_Arot, and t_B as

q_A = q^(ref,sensor)(t_A), (5)
q_Arot = q^(ref,sensor)(t_Arot), (6)
q_B = q^(ref,sensor)(t_B). (7)

Compute the axis of rotation

n_heading^(sensor) = axis(q_A^{-1} · q_Arot). (8)

The computed n_heading^(sensor) heading direction may be flipped; this can be checked based on

• approximate information about the heading direction (e.g., it must be around the x+ direction), or
• ensuring that only a φ ≈ π/2 + 2kπ rotation around the axis is allowed, so φ ≈ 3π/2 + 2kπ is not possible.

The heading directions in the reference frame can be computed as

i_A^(ref) = q^(ref,sensor)(t_A) · n_heading^(sensor), (9)
i_B^(ref) = q^(ref,sensor)(t_B) · n_heading^(sensor). (10)

This way, the i_A and i_B directions are known in both the world and ref frames, so q^(world,ref) can be computed based on Subsection 3.1.

From the computed quantities and the current measurement q^(ref,sensor)(t), the current heading direction in the world frame can be computed as

n_heading^(world)(t) = (q^(world,ref) · q^(ref,sensor)(t)) · n_heading^(sensor). (11)

Then the angle of the current heading can be computed as

φ_computed(t) = arccos(n_heading(t)^T · n_target). (12)

(This is the heading angle that would be obtained assuming accurate measurements and calibration.)
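As an illustration, steps (5)-(12) can be sketched with SciPy's Rotation class. This is a minimal sketch, not the authors' implementation: SciPy's align_vectors stands in for the two-direction construction of Subsection 3.1 (equivalent for noise-free data, least-squares otherwise), and all names are illustrative.

```python
import numpy as np
from scipy.spatial.transform import Rotation as R

def axis_of(q_rel):
    """Unit rotation axis of a relative rotation (the axis() operator of Eq. (8))."""
    rotvec = q_rel.as_rotvec()                    # axis * angle
    return rotvec / np.linalg.norm(rotvec)

def calibrate_axisymmetric(qA, qArot, qB, iA_world, iB_world):
    """qA, qArot, qB: scipy Rotations measured at tA, tArot, tB."""
    n_heading_sensor = axis_of(qA.inv() * qArot)  # Eq. (8)
    # A flip check against prior knowledge of the heading would go here.
    iA_ref = qA.apply(n_heading_sensor)           # Eq. (9)
    iB_ref = qB.apply(n_heading_sensor)           # Eq. (10)
    # q^(world,ref) from the two direction pairs (stand-in for Subsec. 3.1).
    q_world_ref, _ = R.align_vectors(
        np.vstack([iA_world, iB_world]), np.vstack([iA_ref, iB_ref]))
    return n_heading_sensor, q_world_ref

def heading_error_deg(q_world_ref, q_ref_sensor_t, n_heading_sensor, n_target):
    n_world = (q_world_ref * q_ref_sensor_t).apply(n_heading_sensor)  # Eq. (11)
    return np.degrees(np.arccos(np.clip(n_world @ n_target, -1.0, 1.0)))  # Eq. (12)
```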
3.3 Calibration of nonsymmetric tools

Define two different (far from parallel) directions t_1 and t_2 in the world frame. Perform three measurements as follows:

• "A": the device placed arbitrarily,
• "B": the orientation of "A" rotated around t_1 by φ ≈ π/2,
• "C": the orientation of "B" rotated around t_2 by φ ≈ π/2.

Denote the measured q^(ref,sensor) orientation at the time moments of the measurements t_A, t_B, and t_C as

q_A = q^(ref,sensor)(t_A), (13)
q_B = q^(ref,sensor)(t_B), (14)
q_C = q^(ref,sensor)(t_C). (15)

Then the axes of rotation in the sensor frame at t_B can be computed as

t_1^(sensor,B) = axis(q_A^{-1} · q_B), (16)
t_2^(sensor,B) = axis(q_B^{-1} · q_C). (17)

Define the tool frame purposefully and describe the directions t_1 and t_2 at t_B similarly, denoting them by t_1^(tool,B) and t_2^(tool,B). Based on them, the rotation q^(sensor,tool) can be computed using the method described in Subsection 3.1.

The coordinates of these directions in the reference frame can be computed as

t_1^(ref) = q_B · t_1^(sensor,B), (18)
t_2^(ref) = q_B · t_2^(sensor,B). (19)

Based on them and t_1^(world), t_2^(world), the rotation q^(world,ref) can be computed using the method of Subsection 3.1.

From the current measurement, the orientation of the tool frame with respect to the world frame can be obtained as

q^(world,tool)(t) = q^(world,ref) · q^(ref,sensor)(t) · q^(sensor,tool). (20)

Then the angle error of the tool can be computed as

φ_computed(t) = angle((q^(world,target))^{-1} · q^(world,tool)(t)), (21)

assuming ideal calibration and an ideal orientation sensor.
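A corresponding sketch for the nonsymmetric calibration and the evaluation of (20)-(21), under the same assumptions as the previous sketch (SciPy Rotations; align_vectors in place of Subsection 3.1; illustrative names):

```python
import numpy as np
from scipy.spatial.transform import Rotation as R

def axis_of(q_rel):
    rotvec = q_rel.as_rotvec()                     # axis * angle
    return rotvec / np.linalg.norm(rotvec)

def calibrate_nonsymmetric(qA, qB, qC, t1_tool_B, t2_tool_B, t1_world, t2_world):
    """qA, qB, qC: scipy Rotations measured at tA, tB, tC."""
    t1_sensor_B = axis_of(qA.inv() * qB)           # Eq. (16)
    t2_sensor_B = axis_of(qB.inv() * qC)           # Eq. (17)
    # q^(sensor,tool) from direction pairs known in both frames (Subsec. 3.1).
    q_sensor_tool, _ = R.align_vectors(
        np.vstack([t1_sensor_B, t2_sensor_B]), np.vstack([t1_tool_B, t2_tool_B]))
    t1_ref = qB.apply(t1_sensor_B)                 # Eq. (18)
    t2_ref = qB.apply(t2_sensor_B)                 # Eq. (19)
    q_world_ref, _ = R.align_vectors(
        np.vstack([t1_world, t2_world]), np.vstack([t1_ref, t2_ref]))
    return q_sensor_tool, q_world_ref

def tool_angle_error_deg(q_world_ref, q_ref_sensor_t, q_sensor_tool, q_world_target):
    q_world_tool = q_world_ref * q_ref_sensor_t * q_sensor_tool      # Eq. (20)
    rel = q_world_target.inv() * q_world_tool                        # Eq. (21)
    return np.degrees(np.linalg.norm(rel.as_rotvec()))
```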
4. METHODS TO DISPLAY ERRORS

4.1 Directional tools

To display the current error (both magnitude and direction), define an x−y plane perpendicular to n_target, then project n_heading onto this plane and display its coordinates; see Fig. 6.

For the sake of simplicity, assume that the preferred y direction is vertical (denoted as y_proto) and the preferred x direction is a horizontal direction (denoted as x_proto).

Fig. 6. Display of the current heading direction (n_heading(t)) relative to n_target.

If the target direction is far from vertical, directions x and y can be computed as

x = (y_proto × n_target)_norm, y = n_target × x, (22)

otherwise as

y = (n_target × x_proto)_norm, x = y × n_target. (23)

Then the coordinates on the plane are

c(t) = [x y]^T · n_heading(t). (24)
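A sketch of this projection follows, assuming a z-up world frame for the vertical prototype; the switching threshold vertical_tol is an illustrative choice, since the text only requires the target to be "far from vertical":

```python
import numpy as np

def display_coordinates(n_heading, n_target,
                        y_proto=np.array([0.0, 0.0, 1.0]),
                        x_proto=np.array([1.0, 0.0, 0.0]),
                        vertical_tol=0.9):
    """Eqs. (22)-(24): heading coordinates in the plane perpendicular to the target."""
    if abs(n_target @ y_proto) < vertical_tol:     # target far from vertical: Eq. (22)
        x = np.cross(y_proto, n_target)
        x /= np.linalg.norm(x)
        y = np.cross(n_target, x)
    else:                                          # near-vertical target: Eq. (23)
        y = np.cross(n_target, x_proto)
        y /= np.linalg.norm(y)
        x = np.cross(y, n_target)
    return np.array([x @ n_heading, y @ n_heading])   # Eq. (24)
```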
4.2 Nonsymmetric tools

For nonsymmetric tools, a heading direction n_heading can also be defined in the tool frame, chosen so that describing the rotation around it is the simplest. Furthermore, define two directions i and j perpendicular to it and to each other, also given in the tool frame.

For the currently targeted q^(world,tool), compute

n_heading,target^(world) = q_target^(world,tool) · n_heading^(tool), (25)

and directions x and y as in the previous subsection. Then the angle error of this heading direction can be shown as in (24).

In order to visualize the error of rotation around n_heading, display the targeted i and j vectors and their current directions at the projected heading vector, as seen in Fig. 7.

Fig. 7. Display of the current and target orientation via n_heading and the i, j vectors (the current i(t), j(t) at the displayed current orientation n_heading(t), and i_target, j_target at the displayed target orientation n_heading,target).
5. NUMERICAL ANALYSIS

5.1 Numerical simulations

This section demonstrates the achievable precision of the proposed calibration methods, considering the inaccuracy of the applied sensor as well as the uncertainties of the calibration process.

In practical scenarios, the inaccuracies may come from

• the error of the orientation measurement, with angle denoted δ_meas,
• inaccurate calibrator device placement, with error denoted δ_calib,
• inaccurate device placement into the sockets (including the error of the socket geometries), with error denoted δ_socket.

Due to the nonlinear nature of the computations, the error of the overall system is investigated via Monte Carlo simulations; a sketch of the sampling is given below. The following angle error ranges are considered, assuming uniform distribution:

δ_meas ≤ 1.5 deg, (26)
δ_calib ≤ 0.5 deg, (27)
δ_socket ≤ 0.1 deg. (28)
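The text does not specify the sampling scheme beyond the uniform angle bounds; one plausible sketch (an assumption, not the authors' code) draws an isotropic random axis and an angle uniform within the bound:

```python
import numpy as np
from scipy.spatial.transform import Rotation as R

def random_error_rotation(max_angle_deg, rng):
    """Small random rotation: isotropic random axis, angle uniform in [0, bound]."""
    axis = rng.normal(size=3)                  # isotropic direction
    axis /= np.linalg.norm(axis)
    angle = np.deg2rad(rng.uniform(0.0, max_angle_deg))
    return R.from_rotvec(angle * axis)

rng = np.random.default_rng(seed=1)
q_meas = random_error_rotation(1.5, rng)       # delta_meas bound, Eq. (26)
q_calib = random_error_rotation(0.5, rng)      # delta_calib bound, Eq. (27)
q_socket = random_error_rotation(0.1, rng)     # delta_socket bound, Eq. (28)
```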
Directional tool This part investigates how much the computed heading direction can differ from the real direction due to inaccurate measurements; this difference is referred to as the computational angle error ε.

A series of random calibrations (N = 10^6) was performed with the previously described error characteristics. The distributions of the angle errors of n_heading^(sensor) and q^(world,ref) are represented in Fig. 8. The mean angle error of the n_heading^(sensor) direction is 0.62 deg, and the mean angle error of the computed q^(world,ref) orientation is 0.94 deg.

Fig. 8. Error distribution of the calibration for directional sensors (assuming errors (26)).

For each calibration, eight measurements were simulated in different device poses. The ε angle differences of the computed and real n_heading^(world) are represented in Figure 9. Even with such an inaccurate device, the error is below 6.28 deg, and its expected value is around 1.09 deg.

Fig. 9. Error distribution of the measurements for directional sensors (assuming errors (26)).

Nonsymmetric tool In this case, the measure of difference is the angle between the computed frame and the real frame. Similarly, N = 10^6 random calibrations were performed, with eight simulated measurements in each of them. The errors of the calibrated quantities show the distributions plotted in Figure 10. The mean angle error of q^(sensor,tool) is 0.73 deg, and the mean error of q^(world,ref) is 0.78 deg.

Fig. 10. Error distribution of the calibration for nonsymmetric sensors (assuming errors (26)).

The angle error of the computed q^(world,tool) based on the measured orientation is distributed as shown in Figure 11, with a mean value of 1.29 deg.

Fig. 11. Error distribution of the measurements for nonsymmetric sensors (assuming errors (26)).
5.2 Experimental results with the ICM20948 IMU unit

This section shows the application of the proposed method utilizing an IMU sensor module, with a UR16e robot arm ¹ as ground truth. The ICM20948 IMU sensor ² integrates gyroscope, accelerometer, and magnetometer units. For this measurement, the magnetometer was turned off because of the magnetic field of the manipulator. During the experiment preparation, we observed that the manipulator's micro-vibrations could significantly decrease the sensor's performance in terms of noise and drift in the resulting signal. For this reason, the sensor was fixed to the manipulator in a silicone bedding instead of a rigid mounting.

¹ Universal Robots UR16e, https://www.universal-robots.com/products/ur16-robot
² ICM-20948 World's Lowest Power 9-Axis MEMS MotionTracking Device, https://invensense.tdk.com/products/motion-tracking/9-axis/icm-20948

Initializing the experiment, three orientations were set for the calibration:

q_1^(world,tool) = 0.4997 + 0.5003i − 0.5011j + 0.4989k, (29)
q_2^(world,tool) = 0.0006 − 0.0005i − 0.7081j + 0.7061k, (30)
q_3^(world,tool) = 0.0008 + 0.0000i − 0.0014j + 1.0000k. (31)

First, the tool was rotated around z_base by π/2, then around x_base by π/2. The performed path is plotted in Figure 12. After the initial segment, five complete loops were performed. The figure shows the orientation at each measurement point with red line segments in the x direction and blue ones in the z direction.

Fig. 12. 3D TCP path used for the measurement. The red and blue line segments show directions x and z, respectively.

Axisymmetric tool validation Performing the calibration process described in Subsection 3.2, the measurements can be evaluated as in (11). The calibration can also be performed for the data provided by the robot controller, which is considered ground truth. Following that, the angle between the two measurements can be computed. The results are plotted in Fig. 13.

Fig. 13. The angle error of n_heading^(world) of the measurements after calibration. The angular velocity state is also shown via the change of orientation since the last measurement.

The figure shows large error peaks up to 13 deg, but these peaks appear only at the fast transient segments of the plot. Most likely, the large errors are caused by imperfectly synchronized measurements. Nevertheless, from a practical viewpoint, static precision is far more relevant. For this reason, the plot in Figure 14 shows only measurements with a rotation change of less than 0.5 deg since the previous measurement; a sketch of this filtering is given below.

Fig. 14. The angle error of n_heading^(world) of the measurements after calibration, omitting the fast transient (dφ > 0.5 deg) phases. Angular velocity is shown via the change of orientation since the last measurement.
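The thresholding itself can be implemented in a few lines; the following sketch is assumed, as the text only states the 0.5 deg criterion. It builds a boolean mask over a measured orientation sequence:

```python
import numpy as np
from scipy.spatial.transform import Rotation as R

def quasi_static_mask(rotations, threshold_deg=0.5):
    """Keep samples that moved less than the threshold since the previous
    sample; `rotations` is a scipy Rotation holding the measured sequence."""
    rel = rotations[:-1].inv() * rotations[1:]            # step-to-step change
    dphi = np.degrees(np.linalg.norm(rel.as_rotvec(), axis=1))
    return np.concatenate([[False], dphi < threshold_deg])
```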
In this case, the error of the heading direction is mostly below 1.5 deg, with a mean value of 0.54 deg, as highlighted by the histogram in Figure 15.

Fig. 15. Error distribution of n_heading^(world) omitting the fast (dφ > 0.5 deg) transient measurements. The maximal value is 2.38 deg and the mean value is 0.54 deg.

Nonsymmetric tool validation The sensor and the measurement setup can also be considered as a nonsymmetric arrangement. In this case, the calibration process described in Subsection 3.3 is used. After obtaining q^(world,ref) and q^(sensor,tool), the measurements can be evaluated via (20) and compared to the q^(world,tool) orientation provided by the robot. The angle between the two orientations is plotted in Figure 16.

Fig. 16. The angle error of q^(world,tool) of the measurements after calibration. The angular velocity state is also shown via the change of orientation since the last measurement.

The transients show large errors in this case too; for this reason, only the quasi-static measurements are considered, where the change of orientation is less than 0.5 deg between two measurements. These results are plotted in Figure 17.

Fig. 17. The angle error of q^(world,tool) of the measurements after calibration, omitting the fast transient (dφ > 0.5 deg) measurements. Angular velocity is shown via the change of orientation since the last measurement.

The errors are mostly below 2 deg; Figure 18 displays them in a histogram. The errors are larger in this case because the rotation around the heading direction, which was not considered in the former subsection, is also relevant here.

Fig. 18. Error distribution of q^(world,tool) omitting the fast (dφ > 0.5 deg) transient measurements. The maximum error is 2.44 deg and the mean value is 0.98 deg.
6. CONCLUSION

The paper proposed an established computational framework, including calibration and error display methods, that supports the usage of IMU sensors for orientation tracking with respect to an external reference. The method considers two practically important use cases: direction tracking (axisymmetric heading tracking) and nonsymmetric tracking (full orientation tracking). In both cases, the proposed calibration procedure acquires three orientations to enable referencing to an external fixture and calibration of the rotation between the tool and the ad-hoc attached sensor. The propagating effect of measurement and calibration errors was investigated and illustrated via simulated error distributions and experimental measurements. The analysis showed the effectiveness of currently available MEMS-based IMU units with the proposed methods.
REFERENCES

Abyarjoo, F., Barreto, A., Cofino, J., and Ortega, F.R. (2015). Implementing a Sensor Fusion Alg. for 3D Orientation Detection with Inertial/Magnetic Sens. In T. Sobh and K. Elleithy (eds.), Innovations and Advances in Comp., Inf., Syst. Sci., Netw. and Engin., Lecture Notes in Electrical Engineering, 305–310. Springer International Publishing, Cham.

Beange, K.H.E., Chan, A.D.C., and Graham, R.B. (2018). Evaluation of wearable IMU performance for orientation estimation and motion tracking. In 2018 IEEE Int. Symp. on Medical Meas. and App. (MeMeA), 1–6.

Borges, M., Symington, A., Coltin, B., Smith, T., and Ventura, R. (2018). HTC Vive: Analysis and accuracy improvement. In 2018 IEEE/RSJ Int. Conf. on Int. Robots and Syst. (IROS), 2610–2615. IEEE.

Cavallo, A., Cirillo, A., Cirillo, P., De Maria, G., Falco, P., Natale, C., and Pirozzi, S. (2014). Experimental Comparison of Sensor Fusion Algorithms for Attitude Estimation. IFAC Proc. Vol., 47(3), 7585–7591.

Chen, Y., Fu, C., Leung, W.S.W., and Shi, L. (2020). Drift-Free and Self-Aligned IMU-Based Human Gait Tracking System With Augmented Precision and Robustness. IEEE Robotics and Automation Letters, 5(3).

Laidig, D., Caruso, M., Cereatti, A., and Seel, T. (2021). BROAD—A Benchmark for Robust Inertial Orientation Estimation. Data, 6(7).

Li, W., Zhang, L., Sun, F., Yang, L., Chen, M., and Li, Y. (2015). Alignment calibration of IMU and Doppler sensors for precision INS/DVL integrated navigation. Optik, 126(23), 3872–3876.

Mahdi, A.E., Azouz, A., Abdalla, A., and Abosekeen, A. (2022). IMU-Error Estimation and Cancellation Using ANFIS for Improved UAV Navigation. In 2022 13th Int. Conf. on Electr. Eng. (ICEENG), 120–124. IEEE.

Nazarahari, M. and Rouhani, H. (2021). Sensor fusion algorithms for orientation tracking via magnetic and inertial measurement units: An experimental comparison survey. Information Fusion, 76, 8–23.

Nebot, E. and Durrant-Whyte, H. (1997). Initial calibration and alignment of an inertial navigation. In Proc. Fourth Annual Conf. on Mech. and Mach. Vision in Prac., 175–180. IEEE.

Ricci, L., Taffoni, F., and Formica, D. (2016). On the Orientation Error of IMU: Investigating Static and Dynamic Accuracy Targeting Human Motion. PLOS ONE, 11(9).

Shepperd, S.W. (1978). Quaternion from rotation matrix. Journal of Guidance and Control, 1(3), 223–224.

Wang, X. (2009). Fast alignment and calibration algorithms for inertial navigation system. Aerospace Sci. and Tech., 13(4-5), 204–209.

Wen, Z., Yang, G., and Cai, Q. (2021). An Improved Calibration Method for the IMU Biases Utilizing KF-Based AdaGrad Algorithm. Sensors, 21(15).

Zhang, S., Yu, S., Liu, C., Yuan, X., and Liu, S. (2016). A dual-linear Kalman filter for real-time orientation determination system using low-cost MEMS sensors. Sensors, 16(2), 264.

Zihajehzadeh, S., Loh, D., Lee, M., Hoskinson, R., and Park, E. (2014). A cascaded two-step Kalman filter for estimation of human body segment orientation using MEMS-IMU. In 2014 36th Annual Int. Conf. of the IEEE Engin. in Medicine and Biology Soc., 6270–6273.

Zuñiga-Noël, D., Moreno, F.A., and Gonzalez-Jimenez, J. (2021). An analytical solution to the IMU initialization problem for visual-inertial systems. IEEE Rob. and Aut. Let., 6(3), 6116–6122.

APPENDIX

Orientations are described via unit quaternions; the main operations needed for their usage are summarized here. Denote a unit quaternion as

q = w + xi + yj + zk, where w^2 + x^2 + y^2 + z^2 = 1.

The related rotation matrix can be written as

R_q = [ 1 − 2(y^2 + z^2)   2(xy − zw)         2(xz + yw)
        2(xy + zw)         1 − 2(x^2 + z^2)   2(yz − xw)
        2(xz − yw)         2(yz + xw)         1 − 2(x^2 + y^2) ].

The quaternion corresponding to a given rotation matrix can be computed via Shepperd's method, see Shepperd (1978). The quaternion of a rotation around axis t by angle φ is

q = cos(φ/2) + sin(φ/2)(t_x i + t_y j + t_z k),

and its opposite, −q, represents the same rotation. The angle of a quaternion can be computed as

φ = 2 · atan2(√(x^2 + y^2 + z^2), w),

and the axis from the direction of the vector part, taking into account the sign of the real part.

The rotation of a vector v by a quaternion q (written q · v) can be computed as the matrix product R_q · v. If a vector represents a direction v described in frame A, it is written as v^(A). Furthermore, the indexing of rotation matrices and quaternions must be understood as

v^(B) = q^(B,A) · v^(A), and v^(B) = R^(B,A) · v^(A).

The quaternion that represents the inverse rotation is the conjugate,

q^{-1} = w − xi − yj − zk.

Denote another unit quaternion as q_2 = w_2 + x_2 i + y_2 j + z_2 k. The product of the rotations q and q_2, for computations like q · (q_2 · v), can be computed as

q · q_2 = (w w_2 − x x_2 − y y_2 − z z_2) + i(w x_2 + x w_2 + y z_2 − z y_2) + j(w y_2 − x z_2 + y w_2 + z x_2) + k(w z_2 + x y_2 − y x_2 + z w_2).

The angle between orientations A and B can be computed as the angle of the quaternion q^(A,B). The normalization of a vector is denoted as (v)_norm = v/||v||, and the angle between two vectors v and v_2 is computed as

φ = acos(v^T v_2 / (||v|| · ||v_2||)).
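For completeness, these operations can be collected into a small NumPy helper set. This is an illustrative sketch (Hamilton convention, scalar-first [w, x, y, z] storage), not the authors' code:

```python
import numpy as np

def qmul(q, q2):
    """Hamilton product, so that rot(qmul(q, q2), v) == rot(q, rot(q2, v))."""
    w, x, y, z = q
    w2, x2, y2, z2 = q2
    return np.array([w*w2 - x*x2 - y*y2 - z*z2,
                     w*x2 + x*w2 + y*z2 - z*y2,
                     w*y2 - x*z2 + y*w2 + z*x2,
                     w*z2 + x*y2 - y*x2 + z*w2])

def qinv(q):
    """Inverse rotation of a unit quaternion: its conjugate."""
    return np.array([q[0], -q[1], -q[2], -q[3]])

def rot(q, v):
    """Rotate vector v by unit quaternion q via q * (0, v) * q^-1."""
    qv = np.array([0.0, v[0], v[1], v[2]])
    return qmul(qmul(q, qv), qinv(q))[1:]

def angle_axis(q):
    """Rotation angle (via atan2, robust near identity) and unit axis of q."""
    w, x, y, z = q
    n = np.linalg.norm([x, y, z])
    phi = 2.0 * np.arctan2(n, w)
    axis = np.array([x, y, z]) / n if n > 0 else np.array([1.0, 0.0, 0.0])
    return phi, axis
```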
