
Navigation systems

Lecturer

Dr. Akeel Ali Wannas

2021-2022
Lecture 01 Navigation systems Dr. Akeel Ali Wannas

Course Objective
Through this course, the students will be able to understand, learn and apply the science of aviation and
the operation of aircraft, preparing them for successful careers in aircraft-related fields.

Course content
The course gives the fundamental framework and applications of modern global navigation satellite
systems (GNSS) and inertial navigation systems (INS). The course gives an overview of satellite-based
radio navigation systems such as GPS, GLONASS, and GALILEO.

Learning outcome
The course will introduce the students to principles and requirements for the design and use of modern
navigation systems.

Objectives:
The student can combine knowledge in mathematics, statistics and programming in order to solve
fundamental navigation equations with satellite- and inertial navigation systems.

Recommended previous knowledge


Knowledge of electrical engineering fundamentals, mathematics, statistics and fundamentals of
electronics. Signal processing and parameter estimation.


Introduction:
Navigation: is a field of study that focuses on the process of monitoring and controlling the movement
of a craft or vehicle from one place to another. The field of navigation includes four general categories:
land navigation, marine navigation, aeronautic navigation, and space navigation.
The term also refers to the specialized knowledge used by navigators to perform navigation
tasks. All navigational techniques involve locating the navigator's position relative to known locations
or patterns.
Navigation, in a broader sense, can refer to any skill or study that involves the determination of position
and direction. In this sense, navigation includes orienteering and pedestrian navigation.

Methods of navigation
Most modern navigation relies primarily on positions determined electronically by receivers collecting
information from satellites. Most other modern techniques rely on finding intersecting lines of position
or LOP.
Methods of navigation have changed through history. Each new method has enhanced the mariner's
ability to complete his voyage. One of the most important judgments the navigator must make is the best
method to use. Some types of navigation are depicted in the table.

Traditional navigation methods include:

In marine navigation, dead reckoning (DR): one advances a prior position using the ship's course and
speed. The new position is called a DR position. It is generally accepted that only course and speed
determine the DR position. Correcting the DR position for leeway, current effects, and steering error
results in an estimated position (EP). An inertial navigator develops an extremely accurate EP.
(Used at all times.)

In marine navigation, pilotage involves navigating in restricted/coastal waters with frequent
determination of position relative to geographic and hydrographic features.
(Used when within sight of land.)

Land navigation is the discipline of following a route through terrain on foot or by vehicle, using maps
with reference to terrain, a compass, and other basic navigational tools and/or using landmarks and
signs. Wayfinding is the more basic form.
(Used at all times.)


Celestial navigation involves reducing celestial measurements to lines of position using tables, spherical
trigonometry, and almanacs. It is primarily used at sea but can also be used on land.
(Used primarily as a backup to satellite and other electronic systems in the open ocean.)

Electronic navigation covers any method of position fixing using electronic means, including:

Radio navigation uses radio waves to determine position by either radio direction finding systems or
hyperbolic systems, such as Decca, Omega and LORAN-C.
(Availability has declined due to the development of accurate GNSS.)

Radar navigation uses radar to determine the distance from or bearing of objects whose position is
known. This process is separate from radar's use as a collision avoidance system.
(Used primarily when within radar range of land.)

Satellite navigation uses a Global Navigation Satellite System (GNSS) to determine position.
(Used in all situations.)

1. Geometric Aspects of mapping


In the process of map-making, ellipsoidal or spherical surfaces are used to represent the surface of the
Earth. These curved reference surfaces are then projected onto a mapping surface formed into a cylinder, cone, or flat
plane (figure 1). Since a map is a small-scale representation of the Earth's surface it is necessary to apply
some kind of scale reduction.
1.1 Reference surfaces
Two main reference surfaces (or Earth figures) are used to approximate the shape of the Earth. One is
called the ellipsoid, the other is the Geoid. The Geoid is the equipotential surface at mean sea level and
is used for measuring heights represented on maps. The starting points for measuring these heights are
mean sea level points established at coastal places. These points represent an approximation to the
Geoid. There are several realizations of local mean sea levels in the world. These are called local
vertical datums or height datums.
The ellipsoid (also called spheroid) provides a relatively simple mathematical figure of the Earth. It is
used to measure locations, the latitude (φ) and longitude (λ), of points of interest. These locations on the
ellipsoid are then projected onto a mapping plane. There are many different ellipsoids defined in the
world, some well-known are the WGS84, GRS80, International 1924 (also known as Hayford),
Krasovsky, Bessel, or the Clarke 1880 ellipsoid.


Fig. 1 The process of representing the Earth on a flat map.


To measure locations accurately, the selected ellipsoid should fit the area of interest. Therefore a
horizontal datum (also called geodetic datum) is established, which is an ellipsoid but positioned and
oriented in such a way that it best fits to the area or country of interest. There are a few hundred of these
local horizontal datums defined in the world. In recent years, globalisation has led to the
definition of global (or geocentric) datums, such as the ITRF or WGS84.

Fig. 2 A cross section of an ellipsoid, used to represent the Earth surface, defined by its semi-major axis
a and semi-minor axis b. For maps at small scales we can use the mathematically simpler sphere.

1.2 Map projections


To produce a map the curved reference surface of the Earth, approximated by an ellipsoid or a sphere, is
transformed to the flat plane of the map by means of a map projection. In other words, each point on the
reference surface of the Earth with geographic coordinates (φ, λ) may be transformed to a set of Cartesian
coordinates (x, y) or map coordinates representing positions on the map plane.


Fig. 3 Example of a map projection where the reference surface with geographic coordinates (φ, λ) is
projected onto the 2D mapping plane with 2D Cartesian coordinates (x, y).
Hundreds of map projections have been developed in order to accurately represent a particular region or to best
suit a particular type of map. Examples of map projections are Transverse Mercator (also known as
Gauss-Krüger), equidistant cylindrical and conic projection, Lambert's azimuthal, conic and cylindrical
projection, stereographic projection, and various others. Map projections are typically classified
according to the geometric surface from which they are derived: cylinder, cone or plane. The three
classes of map projections are respectively cylindrical, conical and azimuthal.

Fig. 4 Three classes of map projections.


Furthermore, map projections are typically classified according to the distortion properties of a map. The
three distortion properties of map projections are respectively: equal-area (or equivalent), equidistant or
conformal. Equal-area projections correctly represent area sizes, equidistant map projections correctly


represent distances (in certain directions), while conformal map projections correctly represent angles
and shapes (of small areas).

1.3 Map coordinate systems


A map coordinate system can be created by choosing a projection and then tailoring its parameters to fit
any region on the Earth. An example is the coordinate system used in the Netherlands. It is called
Rijksdriehoekstelsel (RD). This 2D Cartesian system is based on the azimuthal stereographic projection
centred in the middle of the country and the Bessel ellipsoid is used as reference surface. The horizontal
datum, with underlying Bessel ellipsoid, is called Amersfoort datum. The origin of the coordinate
system has been shifted (false origin) from the projection centre (Amersfoort) towards the South-West to
avoid negative coordinates inside the country.

Fig. 5 The coordinate system of the Netherlands is derived from an oblique azimuthal stereographic
projection.
Standard coordinate systems have been developed to simplify the process of choosing a system. The
most important standard map coordinate system used is the Universal Transverse Mercator (UTM).
Recent years have seen that globalisation is leading to the establishment of global 3D coordinate
systems. These spatial reference systems can be realized thanks to advances in satellite-based
positioning. The most important standard 3D system for the GIS community is the International
Terrestrial Reference System (ITRS)


2. Coordinate systems
Different kinds of coordinates are used to position objects in a two- or three-dimensional space. Spatial
coordinates (also known as global coordinates) are used to locate objects either on the Earth’s surface in
a 3D space or on the Earth’s reference surface (ellipsoid or sphere) in a 2D space. Specific examples are
the geographic coordinates in a 2D or 3D space and the geocentric coordinates, also known as 3D
Cartesian coordinates. Planar coordinates on the other hand are used to locate objects on the flat surface
of the map in a 2D space. Examples are the 2D Cartesian coordinates and the 2D polar coordinates.
2.1 2D geographic coordinates (𝝓, 𝝀)
The most widely used global coordinate system consists of lines of geographic latitude (phi or 𝜙 or 𝜑)
and longitude (lambda or 𝜆). Lines of equal latitude are called parallels. They form circles on the surface
of the ellipsoid. Lines of equal longitude are called meridians and they form ellipses (meridian ellipses)
on the ellipsoid. Both lines form the graticule when projected onto a map plane. Note that the concept of
geographic coordinates can also be applied to a sphere as the reference surface.

Fig. 6 The latitude (φ) and longitude (λ) angles represent the 2D geographic coordinate system.

The latitude () of a point P (figure 2) is the angle between the ellipsoidal normal through P' and the
equatorial plane. Latitude is zero on the equator ( = 0°), and increases towards the two poles to
maximum values of  = +90 (90°N) at the North Pole and  = - 90° (90°S) at the South Pole.


Fig. 7 The latitude (φ) and longitude (λ) angles and the ellipsoidal height (h) represent the 3D geographic
coordinate system.

The longitude () is the angle between the meridian ellipse which passes through Greenwich and the
meridian ellipse containing the point in question. It is measured in the equatorial plane from the
meridian of Greenwich ( = 0°) either eastwards through = + 180° (180°E) or westwards through  = -
180° (180°W).

Latitude and longitude represent the geographic coordinates (φ, λ) of a point P' (figure 7) with respect to
the selected reference surface. They are also called geodetic coordinates or ellipsoidal coordinates when
an ellipsoid is used to approximate the shape of the Earth. Geographic coordinates are always given in
angular units. As an example, the coordinates for Baghdad are:
Latitude: 33.312805, Longitude: 44.361488
DMS Lat: 33° 18' 46.0980'' N, DMS Long: 44° 21' 41.3568'' E
These latitude and longitude coordinates refer to a specific horizontal datum (here, the WGS84 datum). Note that the use of a
different reference surface will result in a different latitude and longitude.
There are several formats for the angular units of geographic coordinates. Degrees-Minutes-Seconds
(49°30'00"N, 123°30'00"W) is the most common format; another is Decimal Degrees
(49.5000°, −123.5000°), generally given with 4 to 6 decimal places. Conversion between the two formats
is straightforward, as the sketch below illustrates.
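As an illustration, the conversion can be scripted in a few lines. The sketch below is in Python; the function names are just illustrative, and the usual convention that southern latitudes and western longitudes are negative is assumed.

```python
def dms_to_decimal(degrees, minutes, seconds, hemisphere):
    """Convert Degrees:Minutes:Seconds to decimal degrees.

    hemisphere is 'N', 'S', 'E' or 'W'; 'S' and 'W' give negative values.
    """
    sign = -1.0 if hemisphere in ('S', 'W') else 1.0
    return sign * (degrees + minutes / 60.0 + seconds / 3600.0)


def decimal_to_dms(value):
    """Convert decimal degrees to a (sign, degrees, minutes, seconds) tuple."""
    sign = -1 if value < 0 else 1
    value = abs(value)
    degrees = int(value)
    minutes = int((value - degrees) * 60)
    seconds = (value - degrees - minutes / 60.0) * 3600.0
    return sign, degrees, minutes, seconds


# Example: the Baghdad coordinates quoted above
print(dms_to_decimal(33, 18, 46.0980, 'N'))   # approximately 33.312805
print(decimal_to_dms(44.361488))              # approximately (1, 44, 21, 41.3568)
```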

Geographic coordinates are often used to store, manage, and interchange spatial data. The data are
projected onto a local map coordinate system for editing, analysis and mapping. As an example, the internal
coordinate system of Google Earth uses geographic coordinates (latitude/longitude) on the World
Geodetic System of 1984 (WGS84) datum. When the data are displayed on the monitor they are
projected using the equidistant cylindrical (or simple cylindrical) map projection.
Next to the geodetic (or geographic) latitude (φ) there are two other types of latitude: the
astronomic latitude and the geocentric latitude. The astronomic latitude (Φ) (figure 8) is the angle


between the equatorial plane and the normal to the Geoid (i.e. a plumb line). It differs from the geodetic
(or geographic) latitude only slightly, due to the slight deviations of the Geoid from the reference
ellipsoid. The astronomic latitude is the latitude which results directly from observations of the stars,
uncorrected for vertical deflection, and applies only to positions on the Earth's surface. Astronomic
observations are used to establish local horizontal (or geodetic) datums. The geocentric latitude (φ') is
the angle between the equatorial plane and a line from the center of the ellipsoid (used to represent the
Earth). This value usually differs from the geodetic latitude, unless the Earth is represented as a perfect
sphere. Both geocentric and geodetic latitudes refer to the reference ellipsoid and not the Earth.

Fig. 8 Three different latitudes: the geodetic (or geographic) latitude (φ), the astronomic latitude (Φ) and
the geocentric latitude (φ').

2.2 3D geographic coordinates (φ, λ, h)


3D geographic coordinates (φ, λ, h) are obtained by introducing the ellipsoidal height h to the system.
The ellipsoidal height (h) of a point is the vertical distance of the point in question above the
ellipsoid. It is measured in distance units along the ellipsoidal normal from the point to the ellipsoid
surface. 3D geographic coordinates can be used to define a position on the surface of the Earth (point P
in the figure 7).

2.3 Geocentric coordinates (X,Y,Z)


An alternative method of defining a 3D position on the surface of the Earth is by means of geocentric
coordinates (x,y,z), also known as 3D Cartesian coordinates. The system has its origin at the mass-centre
of the Earth with the X- and Y-axes in the plane of the equator. The X-axis passes through the meridian
of Greenwich, and the Z-axis coincides with the Earth's axis of rotation. The three axes are mutually
orthogonal and form a right-handed system. Geocentric coordinates can be used to define a position on
the surface of the Earth (point P in figure 9).
It should be noted that the rotational axis of the Earth changes its position over time (referred to as polar
motion). To compensate for this, the mean position of the pole in the year 1903 (based on observations
between 1900 and 1905) has been used to define the so-called 'Conventional International Origin' (CIO).
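To make the link between the two systems concrete, the sketch below converts 3D geographic coordinates (φ, λ, h) into geocentric (X, Y, Z) coordinates. The WGS84 semi-major axis and flattening are standard published values; the function name and the example height are illustrative assumptions.

```python
import math

# WGS84 ellipsoid parameters
A = 6378137.0            # semi-major axis in metres
F = 1 / 298.257223563    # flattening
E2 = F * (2 - F)         # first eccentricity squared


def geodetic_to_geocentric(lat_deg, lon_deg, h):
    """Convert geodetic latitude/longitude (degrees) and ellipsoidal height h
    (metres) into geocentric X, Y, Z coordinates (metres)."""
    lat = math.radians(lat_deg)
    lon = math.radians(lon_deg)
    # radius of curvature in the prime vertical
    n = A / math.sqrt(1.0 - E2 * math.sin(lat) ** 2)
    x = (n + h) * math.cos(lat) * math.cos(lon)
    y = (n + h) * math.cos(lat) * math.sin(lon)
    z = (n * (1.0 - E2) + h) * math.sin(lat)
    return x, y, z


# Example: Baghdad at an assumed ellipsoidal height of 40 m
print(geodetic_to_geocentric(33.312805, 44.361488, 40.0))
```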


Fig. 9 An illustration of the geocentric coordinate system

2.4 2D Cartesian coordinates (X,Y)

A flat map has only two dimensions: width (left to right) and length (bottom to top). Transforming the
three-dimensional Earth into a two-dimensional map is the subject of map projections and coordinate
transformations (sections 4 and 5). Here, as in several other cartographic applications, two-dimensional
Cartesian coordinates (x, y), also known as planar rectangular coordinates, are used to describe the
location of any point in a map plane unambiguously.
The 2D Cartesian coordinate system is a system of intersecting perpendicular lines, which contains two
principal axes, called the X- and Y-axis. The horizontal axis is usually referred to as the X-axis and the
vertical the Y-axis (note that the X-axis is also sometimes called Easting and the Y-axis the Northing).
The intersection of the X- and Y-axis forms the origin. The plane is marked at intervals by equally
spaced coordinate lines, called the map grid. Giving two numerical coordinates x and y for point P, one
can now precisely and objectively specify any location P on the map.

Fig 10 An illustration of the 2D Cartesian coordinate system.


Normally, the coordinates x = 0 and y = 0 are given to the origin. However, sometimes large positive
values are added to the origin coordinates. This is to avoid negative values for the x and y coordinates in
case the origin of the coordinate system is located inside the area of interest. The point which then has
the coordinates x = 0 and y = 0 is called the false origin.

3. Reference surfaces for mapping


Introduction
The surface of the Earth is anything but uniform. The oceans can be treated as reasonably uniform, but
the surface or topography of the land masses exhibits large vertical variations between mountains and
valleys. These variations make it impossible to approximate the shape of the Earth with any reasonably
simple mathematical model. Consequently, two main reference surfaces have been established to
approximate the shape of the Earth. One reference surface is called the Geoid, the other reference
surface is the ellipsoid. These are illustrated in the figure below.

Fig. 11 The Earth's surface and two reference surfaces used to approximate it: the Geoid, and a reference
ellipsoid. The deviation between the Geoid and a reference ellipsoid is called geoid separation (N).

3.1 The Geoid and the vertical datum


We can simplify matters by imagining that the entire Earth’s surface is covered by water. If we ignore
tidal and current effects on this ‘global ocean’, the resultant water surface is affected only by gravity.
This has an effect on the shape of this surface because the direction of gravity - more commonly known
as plumb line - is dependent on the mass distribution inside the Earth. Due to irregularities or mass
anomalies in this distribution, the 'global ocean' results in an undulated surface. This surface is called the
Geoid. The plumb line through any surface point is always perpendicular to it.
Where a mass deficiency exists, the Geoid will dip below the mean ellipsoid. Conversely, where a mass
surplus exists, the Geoid will rise above the mean ellipsoid. These influences cause the Geoid to deviate
from a mean ellipsoidal shape by up to +/- 100 meters. The deviation between the Geoid and an ellipsoid
is called the geoid separation (N) or geoid undulation. The biggest presently known undulations are the
minimum in the Indian Ocean with N = -100 meters and the maximum in the northern part of the
Atlantic Ocean with N = +70 meters (figure 13).


Fig. 12 The Geoid, exaggerated to illustrate the complexity of its surface.

Fig. 13 Deviations (undulations) between the Geoid and the WGS84 ellipsoid.
The Geoid is used to describe heights. In order to establish the Geoid as reference for heights, the
ocean’s water level is registered at coastal places over several years using tide gauges (mareographs).
Averaging the registrations largely eliminates variations of the sea level with time. The resulting water
level represents an approximation to the Geoid and is called the mean sea level.
The local vertical datum (or height datum) is implemented through a leveling network (figure 13 (a)
below). A leveling network consists of benchmarks, whose height above mean sea level has been
determined through geodetic leveling. The implementation of the datum enables easy user access. The
surveyors do not need to start from scratch every time they need to determine the height of a new point.
They can use the benchmark of the leveling network that is closest to the point of interest (figure 13 (b)
below).


Fig. 13 A levelling network implements a local vertical datum: (a) network of levelling lines starting
from the Amsterdam tide-gauge, showing some of the benchmarks; (b) how the orthometric height (H)
is determined for a point, working from the nearest benchmark.

The use of satellite-based positioning equipment (e.g. GPS) to determine heights with respect to a
reference ellipsoid (e.g. WGS84) is becoming more common. These heights are known as the ellipsoidal
heights (height h above the ellipsoid). Ellipsoidal heights have to be adjusted before they can be
compared to orthometric (mean sea level) heights. Geoid undulations (N) are used to adjust the
ellipsoidal heights (H = h - N).

Fig 14 Height h above the reference ellipsoid and height H above the Geoid for two points on the Earth
surface. The ellipsoidal height is measured orthogonal to the ellipsoid. The orthometric height is
measured orthogonal to the Geoid.
As a result of satellite gravity missions, it is currently possible to determine the orthometric height
(height H above the Geoid) with centimetre level accuracy. It is foreseeable that a global vertical datum
may become ubiquitous in the next 10-15 years. If all published maps are also using this global vertical


datum by that time, heights will become globally comparable, effectively making local vertical datums
redundant for GIS users.

Summary
Navigation definition: the science of getting ships, aircraft, or spacecraft from place to place, especially
the method of determining position, course, and distance traveled.
There are two main types of navigation:
1. Traditional navigation methods include:
a. In marine navigation, Dead reckoning
b. In marine navigation, Pilotage involves navigating in restricted/coastal waters
c. Land navigation is the discipline of following a route through terrain on foot or by vehicle
d. Celestial navigation involves reducing celestial measurements to lines of position using
tables, spherical trigonometry, and almanacs.
2. Electronic navigation covers any method of position fixing using electronic means, including:
a. Radio navigation uses radio waves to determine the position
b. Radar navigation uses radar to determine the distance from or bearing of objects whose
position is known
c. Satellite navigation uses a Global Navigation Satellite System (GNSS)

Geometric Aspects of mapping


In the process of map-making, ellipsoidal or spherical surfaces are used to represent the surface of the
Earth. These curved reference surfaces are then projected onto a mapping surface formed into a cylinder, cone, or flat
plane.

Reference surfaces
Two main reference surfaces (or Earth figures) are used to approximate the shape of the Earth. One is
called the ellipsoid, the other is the Geoid. The Geoid is the equipotential surface at mean sea level and
is used for measuring heights represented on maps. The starting points for measuring these heights are
mean sea level points established at coastal places.

Map projection is a way to flatten a globe's surface into a plane in order to make a map. This requires a
systematic transformation of the latitudes and longitudes of locations from the surface of the globe into
locations on a plane.

Map projections are typically classified according to the geometric surface from which they are derived:
cylinder, cone or plane. The three classes of map projections are respectively cylindrical, conical and
azimuthal.


A map coordinate system can be created by choosing a projection and then tailoring its parameters to fit
any region on the Earth.

Coordinate systems
a. 2D geographic coordinates (ϕ,λ) (either of the two lines of latitude and longitude whose
intersection determines the geographical point of a place)
b. 3D geographic coordinates (ϕ,λ, h)
c. Geocentric coordinates (X,Y,Z)
d. 2D Cartesian coordinates (X,Y)

Reference surfaces for mapping


A vertical datum is used for measuring the elevations of points on the Earth's surface.
The vertical datum is sometimes referred to as the land levelling datum and is based upon the
measurement of mean sea level (MSL).
Different height systems:
Orthometric height (H) is the height of the Earth's surface above the Geoid.
Geoid height (N) is the height of the Geoid above the ellipsoid.
Ellipsoidal height (h) is the height of the Earth's surface above the ellipsoid. GPS uses ellipsoidal
heights.
The relationship between orthometric height (H), ellipsoidal height (h) and geoid height (N) is given by H = h − N.

Lecture 02 Navigation systems Dr. Akeel Ali Wannas

1. Introduction to Global Navigation Satellite System (GNSS)


A satellite navigation or satnav system is a system that uses satellites to provide autonomous geo-spatial
positioning. It allows small electronic receivers to determine their location (longitude, latitude, and
altitude/elevation) to high precision (within a few centimeters to metres) using time signals transmitted
along a line of sight by radio from satellites. The system can be used for providing position, navigation
or for tracking the position of something fitted with a receiver (satellite tracking). The signals also allow
the electronic receiver to calculate the current local time to high precision, which allows time
synchronization.
Global coverage for each system is generally achieved by a satellite constellation of 18–30 medium
Earth orbit (MEO) satellites spread between several orbital planes. The actual systems vary, but use
orbital inclinations of >50° and orbital periods of roughly twelve hours (at an altitude of about 20,000
kilometres or 12,000 miles).

Fig.1 GPS Satellites flying orbital flight around Earth.

2. What is GNSS?
Global Navigation Satellite System (GNSS) is the standard generic term for satellite navigation
systems such as GPS, GLONASS, GALILEO, BeiDou, QZSS and NAVIC.
Global constellations:
• GPS, USA
• GLONASS, Russia
• Galileo, Europe
• BeiDou (COMPASS), China
Regional constellations:
• QZSS, Japan
• NAVIC (IRNSS), India

3. GPS Components
GPS is a three-part system that includes:


a) Space Segment – They serve like stars in the constellation.


• GPS satellites fly in circular orbits at an altitude of 20,200 km and with a period of 12 hours.
• Powered by solar cells.
• The satellites continuously orient themselves to point their solar panels toward the sun and their
antenna toward the earth.
• Orbits are designed so that at least six satellites are always within line of sight from any location
on the planet.
b) Control Segment – They monitor and control the satellites. Ground stations also identify their
location.
• Master Control System
• Monitor Stations
• Ground Antennas
c) User Segment – Receivers are constantly listening for signals from the satellites. Highly advanced
receivers can even identify the exact location within a fraction of an inch.

4. How GPS works?


GPS, as intricate as it may look, works based on a simple mathematical process named 3-dimensional
trilateration. To fully understand 3-dimensional trilateration, it is important to have a basic knowledge
of its prerequisite, namely 2-D trilateration.

1. 2-D Trilateration
It's simply the use of circles in order to locate an object. Here's an example (figures not to scale):


You are told that there is an imaginary basketball on the court. You need to find the exact location of
that ball based on the information provided.
1st clue: the ball is located 10 m from the hoop.
This being said, the ball can be located anywhere, in any direction, 10 m away from the hoop. So the ball
is somewhere on the green circle.

2nd clue: the ball is located 30 m from point A.

The ball can now be located at two different points, namely the 2 points where the green and red
circles (rest of the circles purposely cut off) overlap.

A 3rd clue is thus needed to know the exact location of the ball. You are told that the ball is 25 m
from point B.


The intersection of these 3 circles corresponds to the exact location of the basketball. In other words, it's
the only location where the ball would be simultaneously 10 m from the hoop, 30 m from point A and
25 m from point B.
Through the use of 3 circles, or, in other words, by using 2-D trilateration, we found the exact location
of our imaginary basketball.
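The basketball example can also be checked numerically. The sketch below is a minimal Python illustration: the anchor coordinates are assumed (the figures in the text are not to scale and give only the distances), and the two circle equations are differenced against the first to give a small linear system.

```python
import numpy as np

def trilaterate_2d(anchors, distances):
    """Locate a point in the plane from its distances to three known anchors.

    Subtracting the first circle equation from the other two removes the
    quadratic terms and leaves a 2x2 linear system in (x, y).
    """
    (x1, y1), (x2, y2), (x3, y3) = anchors
    r1, r2, r3 = distances
    a = np.array([[2 * (x2 - x1), 2 * (y2 - y1)],
                  [2 * (x3 - x1), 2 * (y3 - y1)]])
    b = np.array([r1**2 - r2**2 + x2**2 - x1**2 + y2**2 - y1**2,
                  r1**2 - r3**2 + x3**2 - x1**2 + y3**2 - y1**2])
    return np.linalg.solve(a, b)

# Assumed positions (metres) for the hoop, point A and point B,
# chosen so that the three given distances are consistent.
anchors = [(0.0, 0.0), (30.0, 26.0), (-14.0, 23.0)]
print(trilaterate_2d(anchors, (10.0, 30.0, 25.0)))   # approximately [6. 8.]
```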

2. 3-D Trilateration
3-dimensional trilateration works the same way, except that the circles are replaced by spheres. Here's
how 3-D trilateration is used in the Global Positioning System to locate the exact position of an object.
As you read earlier, the GPS receiver receives and interprets radio waves emitted by at least 4 satellites.
Similarly to 2-D trilateration, the intersection between those spheres will determine the exact location
of the GPS receiver.

The GPS receiver receives a signal from the 1st satellite and calculates the distance between itself and the
satellite. Let's say the receiver is 20 000 km from the satellite. This means that the receiver could be at
any point, in any direction, on the surface of the green sphere with a radius of 20 000 km.


The receiver captures another signal from Satellite 2. After calculating the distance, we notice that the
two spheres overlap to form a perfect circle. The receiver could be anywhere on the edge of that circle.

The receiver captures radio waves from a third satellite. We notice that the 3 spheres overlap at 2 points.
These are the 2 possible locations of the GPS receiver.

A fourth signal received from a fourth Satellite determines the exact location of the GPS receiver. This
location corresponds to the sole point of overlap of the 4 spheres.
As you just saw, a simple mathematical process is what makes possible this amazing technology.


3. What are Radio Waves?


Radio waves are low-energy electromagnetic waves used for long-distance communication. Like all
electromagnetic waves, radio waves travel at the speed of light, i.e. at 300 000 km/s.
To understand the following explanations, it is important to know this good, old equation:
𝑑 = 𝑣𝑡
How is the distance calculated?
When you turn on your GPS receiver, it starts receiving the code emitted by the satellite. However, the
distance the wave has to travel (from satellite to Earth) creates a slight delay in the reception of the code
by the GPS receiver. In other words, the two codes are still identical, but the one received from the satellite
will lag slightly behind the replica generated by the receiver.

This slight incongruity between the two identical codes is measured to find the time needed
for the code emitted by the satellite to reach the receiver.
Once the delay is calculated, with the help of highly accurate clocks, the receiver multiplies
the time by the speed of light:
𝑑 = 𝑣𝑡
and subsequently finds its exact distance from the satellite.
Example:
What is the distance between Satellite X and receiver Y if the time needed for the radio waves emitted
by Satellite X to reach receiver Y is calculated by the GPS receiver to be 0.09 seconds?
The first step is to identify the variables we know in the equation d = vt:
d = ?   t = 0.09 s   v = speed of light = 300 000 km/s
The next step consists of solving for d:
d = 300 000 km/s × 0.09 s = 27 000 km
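A one-line check of this arithmetic (with the speed of light rounded to 300 000 km/s, as in the text):

```python
SPEED_OF_LIGHT_KM_S = 300_000          # km/s, rounded as in the example
delay_s = 0.09                         # measured signal travel time in seconds
distance_km = SPEED_OF_LIGHT_KM_S * delay_s
print(distance_km)                     # 27000.0 km
```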


5. GPS signals
GPS signals include ranging signals, used to measure the distance to the satellite, and navigation
messages. The navigation messages include ephemeris data, used to calculate the position of each
satellite in orbit, and information about the time and status of the entire satellite constellation, called the
almanac.
5.1. Pseudorandom Noise (PRN) codes
Pseudorandom noise (PRN) codes are an important element of code division multiple access (CDMA)
based satellite navigation systems. Each satellite within a GNSS constellation has a unique PRN code
that it transmits as part of the C/A navigation message. This code allows any receiver to identify exactly
which satellite(s) it is receiving.

The PRN codes act as spreading codes in the spread-spectrum communications system, and must be
carefully chosen to minimise interference between each satellite signal. Failure to do so would leave the
system open to so-called CDMA noise, potentially degrading performance to unworkable levels.

It is not only satellites that are allocated PRN codes, they are also necessary for augmentation systems
and pseudolites. Therefore, the PRN codes for each GNSS have to be carefully managed.

In the GPS system, this management is performed by the GPS Directorate, which has already defined a
large set of GPS PRN sequences that provide good auto- and cross-correlation properties. Operators of
augmentation systems and other pseudolites must then apply to the GPS Directorate to be allocated one
of the codes from this sequence.


5.2. Navigation Message


Every satellite receives from the ground antennas the navigation data which is sent back to the users
through the navigation message. The Navigation Message provides all the necessary information to
allow the user to perform the positioning service. It includes the Ephemeris parameters, needed to
compute the satellite coordinates with enough accuracy, the Time parameters and Clock Corrections, to
compute satellite clock offsets and time conversions, the Service Parameters with satellite health
information (used to identify the navigation data set).
5.3. Carrier wave
In telecommunications, a carrier wave, carrier signal, or just carrier, is a waveform (usually sinusoidal)
that is modulated (modified) with an information-bearing signal for the purpose of conveying
information. This carrier wave usually has a much higher frequency than the input signal does. The
purpose of the carrier is usually either to transmit the information through space as an electromagnetic
wave (as in radio communication), or to allow several carriers at different frequencies to share a
common physical transmission medium by frequency division multiplexing (as in a cable television
system). The term originated in radio communication, where the carrier wave creates the waves which
carry the information (modulation) through the air from the transmitter to the receiver.

5.4. The Mathematical Model


For concreteness, consider a ship at sea in an unknown location. It has a GPS receiver that obtains
simultaneous signals from four satellites. Each signal specifies its transmission time and the position of
the satellite at that time. This allows the GPS receiver to compute its position and time. To begin with,
we imagine that there is an xyz-coordinate system with the earth centered at the origin, the positive z axis
running through the North Pole and fixed relative to the earth. The unknown position of the ship can be
expressed as a point (𝑥, 𝑦, 𝑧), which can later be translated into a latitude and longitude. To simplify
things, let us mark off the three axes in units equal to the radius of the earth. Thus, a point at sea level
will have
x² + y² + z² = 1
in this system. Also, we will measure time in units of milliseconds. The GPS system finds distances by
knowing how long it takes a radio signal to get from one point to another. For this we need to know the
speed of light, approximately equal to .047 (in units of earth radii per millisecond). Our ship is at an
unknown position and has no clock. It receives simultaneous signals from four satellites, giving their
positions and times as shown in Table 1. (These numbers were made up for the example; in a real case
the satellite positions would not be such simple vectors.)

Table 1 Satellite positions (in earth radii) and signal transmission times (in milliseconds)
Satellite | Position | Time
1 | (1, 2, 0) | 19.9
2 | (2, 0, 2) | 2.4
3 | (1, 1, 1) | 32.6
4 | (2, 1, 0) | 19.9

Let (x, y, z) be the ship’s position and t the time when the signal arrives. Our goal is to determine the
value of these variables. Using the data from the first satellite, we can compute the distance from the


ship as follows. The signal was sent at time 19.9 and arrived at time t. Traveling at a speed of 0.047, that
makes the distance
d = 0.047(t − 19.9)
This same distance can be expressed in terms of (x, y, z) and the satellite's position (1, 2, 0):

d = √((x − 1)² + (y − 2)² + (z − 0)²)


Combining these results leads to the equation
(x − 1)² + (y − 2)² + (z − 0)² = 0.047²(t − 19.9)²   (1)
Similarly, we can derive a corresponding equation for each of the other three satellites. That gives us
four equations in four unknowns, and so we can solve for 𝑥, 𝑦, 𝑧 𝑎𝑛𝑑 𝑡. These are not linear equations,
but we can use algebra to obtain a linear system that we can solve.
Expanding all the squares and rearranging leads to:
2x + 4y − 2(0.047²)(19.9)t = 1² + 2² − 0.047²(19.9)² + x² + y² + z² − 0.047²t²

Similar equations can be derived from the three other satellites. Writing all four equations together gives
2x + 4y + 0z − 2(0.047²)(19.9)t = 1² + 2² + 0² − 0.047²(19.9)² + x² + y² + z² − 0.047²t²
4x + 0y + 4z − 2(0.047²)(2.4)t = 2² + 0² + 2² − 0.047²(2.4)² + x² + y² + z² − 0.047²t²
2x + 2y + 2z − 2(0.047²)(32.6)t = 1² + 1² + 1² − 0.047²(32.6)² + x² + y² + z² − 0.047²t²
4x + 2y + 0z − 2(0.047²)(19.9)t = 2² + 1² + 0² − 0.047²(19.9)² + x² + y² + z² − 0.047²t²

The quadratic terms in all the equations are the same, so by subtracting the first equation from the other
three, we obtain a system of three linear equations:
2x − 4y + 4z + 2(0.047²)(17.5)t = 8 − 5 + 0.047²(19.9² − 2.4²)
0x − 2y + 2z − 2(0.047²)(12.7)t = 3 − 5 + 0.047²(19.9² − 32.6²)
2x − 2y + 0z + 2(0.047²)(0)t = 5 − 5 + 0.047²(19.9² − 19.9²)

By deriving the general solution, it will be possible to express three of the unknowns in terms of the
fourth. Then, substitution in one of the original quadratic equations will produce a quadratic equation in
one variable. Solving that will lead, in turn, to values for the other three variables. Proceeding
according to this plan, we formulate the linear system as an augmented matrix:

[ 2  −4   4   0.077  |  3.86 ]
[ 0  −2   2  −0.056  | −3.47 ]
[ 2  −2   0   0      |  0    ]
Row-reducing the matrix to reduced row echelon form:

R3 = R3 − R1; R1 = R1/2:
[ 1  −2   2   0.0385 |  1.930 ]
[ 0  −2   2  −0.0560 | −3.470 ]
[ 0   2  −4  −0.0770 | −3.860 ]

R3 = R3 + R2; R2 = R2/(−2):
[ 1  −2   2   0.0385 |  1.930 ]
[ 0   1  −1   0.0280 |  1.735 ]
[ 0   0  −2  −0.1330 | −7.330 ]

R3 = R3/(−2):
[ 1  −2   2   0.0385 |  1.930 ]
[ 0   1  −1   0.0280 |  1.735 ]
[ 0   0   1   0.0665 |  3.665 ]

R2 = R2 + R3; R1 = R1 − 2·R3:
[ 1  −2   0  −0.0945 | −5.400 ]
[ 0   1   0   0.0945 |  5.400 ]
[ 0   0   1   0.0665 |  3.665 ]

R1 = R1 + 2·R2:
[ 1   0   0   0.0945 |  5.400 ]
[ 0   1   0   0.0945 |  5.400 ]
[ 0   0   1   0.0665 |  3.665 ]

Rounded, the reduced row echelon form is:
[ 1   0   0   0.095 |  5.40 ]
[ 0   1   0   0.095 |  5.40 ]
[ 0   0   1   0.067 |  3.67 ]

x + 0.095t = 5.4
y + 0.095t = 5.4
z + 0.067t = 3.67
Therefore, the general solution yields:
x = 5.4 − 0.095t,  y = 5.4 − 0.095t,  z = 3.67 − 0.067t,  t is free

Returning to (1), and substituting the above expressions for x, y, z, we obtain

(5.4 − 0.095t − 1)² + (5.4 − 0.095t − 2)² + (3.67 − 0.067t)² = 0.047²(t − 19.9)²

(4.4 − 0.095t)² + (3.4 − 0.095t)² + (3.67 − 0.067t)² = 0.047²(t − 19.9)²

or
0.02t² − 1.88t + 43.56 = 0

t₁ = 41.43 ms ; t₂ = 52.57 ms
leading to two solutions, 41.43 and 52.57. If we select the first solution, then (x, y, z) =
(1.317, 1.317, 0.790), which has a length of about 2. We are using units of earth radii, so this point is
around 4000 miles above the surface of the earth. The second value of t leads to (x, y, z) =
(0.667, 0.667, 0.332), with length 0.9997. That places the point on the surface of the earth and gives the
location of the ship. Of course, to use the information, we would want to convert it to latitude and

longitude. Dr. Kalman provides a nice example of the use of linear algebra for solving a system.
However, as he noted, it is not the complete story. There are many other refinements that must be taken
into account to provide the accuracies expected. I'll leave you with some examples of the complexity
of the actual process.
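For readers who want to check the arithmetic numerically, the following NumPy sketch reproduces the same procedure: it forms the three linear equations by differencing, expresses (x, y, z) in terms of the free variable t, and solves the remaining quadratic. The satellite data are the made-up values of Table 1; any small differences from the hand-computed numbers above are due to rounding in the text.

```python
import numpy as np

c = 0.047  # speed of light in earth radii per millisecond
# Made-up satellite positions (earth radii) and transmit times (ms) from Table 1
sats = np.array([[1.0, 2.0, 0.0],
                 [2.0, 0.0, 2.0],
                 [1.0, 1.0, 1.0],
                 [2.0, 1.0, 0.0]])
times = np.array([19.9, 2.4, 32.6, 19.9])

# Each satellite i gives |p - s_i|^2 = c^2 (t - t_i)^2.  Subtracting the first
# equation from the others removes the quadratic terms, leaving A @ [x,y,z,t] = b.
A = np.column_stack([2 * (sats[1:] - sats[0]),
                     -2 * c**2 * (times[1:] - times[0])])
b = (np.sum(sats[1:]**2, axis=1) - np.sum(sats[0]**2)
     - c**2 * (times[1:]**2 - times[0]**2))

# Express (x, y, z) as a function of the free variable t:  p = p0 + t * dp
M, w = A[:, :3], A[:, 3]
p0 = np.linalg.solve(M, b)    # position when t = 0
dp = np.linalg.solve(M, -w)   # sensitivity of the position to t

# Substitute back into the first quadratic equation and solve for t
d0 = p0 - sats[0]
a2 = dp @ dp - c**2
a1 = 2 * (d0 @ dp) + 2 * c**2 * times[0]
a0 = d0 @ d0 - c**2 * times[0]**2
for t in np.roots([a2, a1, a0]):
    p = p0 + t * dp
    print(t, p, np.linalg.norm(p))   # the root with |p| close to 1 is the ship
```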
Although GPS is a remarkable advancement in navigational technology, it has its limitations. Sources of
GPS signal errors: factors that can degrade the GPS signal and thus affect accuracy include the
following:
1. Ionosphere and troposphere delays - The satellite signal slows as it passes through the atmosphere.
The GPS system uses a built-in model that calculates an average amount of delay to partially correct for
this type of error.
2. Signal multipath - This occurs when the GPS signal is reflected off objects such as tall buildings or
large rock surfaces before it reaches the receiver. This increases the travel time of the signal, thereby
causing errors.
3. Receiver clock errors - A receiver's built-in clock is not as accurate as the atomic clocks onboard
the GPS satellites. Therefore, it may have very slight timing errors.
4. Orbital errors - Also known as ephemeris errors, these are inaccuracies of the satellite's reported
location.
5. Number of satellites visible - The more satellites a GPS receiver can "see", the better the accuracy.
Buildings, terrain, electronic interference, or sometimes even dense foliage can block signal reception
and thereby cause position errors or possibly no position reading at all. GPS units typically will not work
indoors, underwater or underground.
6. Satellite geometry/shading - This refers to the relative position of the satellites at any given time.
Ideal satellite geometry exists when the satellites are located at wide angles relative to each other. Poor
geometry results when the satellites are located in a line or in a tight grouping.


Summary
1. Global Navigation Satellite System (GNSS): the standard generic term for satellite navigation
systems such as GPS, GLONASS, GALILEO, BeiDou, QZSS and NAVIC. There are two types:
1. Global constellations 2. Regional constellations

2.GPS Components
GPS is a three-part system that includes:
a) Space Segment – They serve like stars in the constellation.
• GPS satellites fly in circular orbits at an altitude of 20,200 km and with a period of 12 hours.
• Powered by solar cells.
• The satellites continuously orient themselves to point their solar panels toward the sun and their
antenna toward the earth.
• Orbits are designed so that at least six satellites are always within line of sight from any location on
the planet.
b) Control Segment – They monitor and control the satellites. Ground stations also identify their location.
• Master Control System
• Monitor Stations
• Ground Antennas
c) User Segment – Receivers are constantly listening for signals from the satellites. Highly advanced
receivers can even identify the exact location within a fraction of an inch.
3. How GPS works?
GPS, as intricate as it may look, works based on a simple mathematical process named 3-dimensional
trilateration.

4.What are Radio Waves?


Radio waves are low-energy electromagnetic waves used for long-distance communication. Like all
electromagnetic waves, radio waves travel at the speed of light, i.e. at 300 000 km/s.
To understand the explanations in this lecture, it is important to know this good, old equation:
𝑑 = 𝑣𝑡


5.GPS signals
GPS signals include ranging signals, used to measure the distance to the satellite, and navigation
messages. The navigation messages include 1)ephemeris data, used to calculate the position of each
satellite in orbit, and 2)information about the time and status of the entire satellite constellation, called
the almanac.
The transmitted GPS signal consists of:
1) pseudorandom noise (PRN) codes; 2) the navigation message; and 3) the carrier wave.

6.The Mathematical Model


See the sixth paragraph of this lecture
7.Sources of GPS signal errors:
1. Ionosphere and troposphere delays - The satellite signal slows as it passes through the atmosphere.
The GPS system uses a built-in model that calculates an average amount of delay to partially correct for
this type of error.
2. Signal multipath - This occurs when the GPS signal is reflected off objects such as tall buildings or
large rock surfaces before it reaches the receiver. This increases the travel time of the signal, thereby
causing errors.
3. Receiver clock errors - A receiver's built-in clock is not as accurate as the atomic clocks onboard the
GPS satellites. Therefore, it may have very slight timing errors.
4. Orbital errors - Also known as ephemeris errors, these are inaccuracies of the satellite's reported
location.
5. Number of satellites visible - The more satellites a GPS receiver can "see", the better the accuracy.
Buildings, terrain, electronic interference, or sometimes even dense foliage can block signal reception
and thereby cause position errors or possibly no position reading at all. GPS units typically will not work
indoors, underwater or underground.
6. Satellite geometry/shading - This refers to the relative position of the satellites at any given time. Ideal
satellite geometry exists when the satellites are located at wide angles relative to each other. Poor
geometry results when the satellites are located in a line or in a tight grouping.

Lecture 03 Navigation systems Dr. Akeel Ali Wannas

Calculate distance, bearing and more between latitude/longitude points

1. Introduction
This lecture presents a variety of calculations for latitude/longitude points, with the formulas. All these
formulas are for calculations on the basis of a spherical earth (ignoring ellipsoidal effects), which is
accurate enough for most purposes. (In fact, the earth is very slightly ellipsoidal; using a spherical
model gives errors typically up to 0.3%.)

2. Distance between two points


This uses the 'haversine' formula to calculate the great-circle distance between two points – that is, the
shortest distance over the earth's surface – giving an 'as the crow flies' distance between the points
(ignoring any hills they fly over, of course!).
Haversine formula:
a = sin²(Δφ/2) + cos φ1 ⋅ cos φ2 ⋅ sin²(Δλ/2)

c = 2 ⋅ atan2 ( √a, √(1 − a) )


d = R ⋅ c
Δφ = φ2 − φ1 ; Δλ = λ2 − λ1

where φ is latitude, λ is longitude (in radians), R is the earth's radius (mean radius = 6,371,000 m);


note that angles need to be in radians to pass to trig functions!

atan2:
The function atan2(y, x) computes the principal value of the argument (angle) of the complex number x + iy.
In terms of the standard arctan function, whose range is (−π/2, π/2), it can be expressed as follows:
atan2(y, x) = arctan(y/x)          if x > 0
            = arctan(y/x) + π      if x < 0 and y ≥ 0
            = arctan(y/x) − π      if x < 0 and y < 0
            = +π/2                 if x = 0 and y > 0
            = −π/2                 if x = 0 and y < 0
            = undefined            if x = 0 and y = 0

The haversine formula remains particularly well-conditioned for numerical computation even at small
distances – unlike calculations based on the spherical law of cosines.
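A direct transcription of the haversine formula into Python (the mean radius is the value quoted above; the function name is illustrative):

```python
import math

R = 6_371_000.0  # mean earth radius in metres

def haversine_distance(lat1, lon1, lat2, lon2):
    """Great-circle distance in metres between two points given in degrees."""
    phi1, phi2 = math.radians(lat1), math.radians(lat2)
    dphi = math.radians(lat2 - lat1)
    dlmb = math.radians(lon2 - lon1)
    a = (math.sin(dphi / 2) ** 2
         + math.cos(phi1) * math.cos(phi2) * math.sin(dlmb / 2) ** 2)
    c = 2 * math.atan2(math.sqrt(a), math.sqrt(1 - a))
    return R * c
```

Calling, for example, haversine_distance(33.310116, 44.404674, 30.052798, 31.238379) gives the Baghdad–Cairo distance asked for in the homework at the end of this lecture.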

3. Spherical Law of Cosines


In fact, most modern computers & languages use 64-bit floating-point numbers, which provide 15
significant figures of precision. By my estimate, with this precision, the simple spherical law of cosines
formula (cos c = cos a · cos b + sin a · sin b · cos C) gives well-conditioned results down to distances as
small as a few metres on the earth's surface. (Note that the geodetic form of the law of cosines is
rearranged from the canonical one so that the latitude can be used directly, rather than the colatitude.)
This makes the simpler law of cosines a reasonable 1-line alternative to the haversine formula for many
geodesy purposes (if not for astronomy). The choice may be driven by programming language,
processor, coding context, available trig functions (in different languages), etc – and, for very small
distances an equirectangular approximation may be more suitable.
d = acos( sin φ1 ⋅ sin φ2 + cos φ1 ⋅ cos φ2 ⋅ cos Δλ ) ⋅ R
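The same distance as a one-line alternative, again as a sketch with inputs in degrees:

```python
import math

def cosines_distance(lat1, lon1, lat2, lon2, R=6_371_000.0):
    """Great-circle distance (metres) via the spherical law of cosines."""
    phi1, phi2 = math.radians(lat1), math.radians(lat2)
    dlmb = math.radians(lon2 - lon1)
    return math.acos(math.sin(phi1) * math.sin(phi2)
                     + math.cos(phi1) * math.cos(phi2) * math.cos(dlmb)) * R
```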


4. Equirectangular approximation
If performance is an issue and accuracy less important, for small distances Pythagoras' theorem can be
used on an equirectangular projection:
x = Δλ ⋅ cos φm
y = Δφ
φm = (φ1 + φ2) / 2
d = R ⋅ √(x² + y²)
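As a sketch, the approximation translates to:

```python
import math

def equirectangular_distance(lat1, lon1, lat2, lon2, R=6_371_000.0):
    """Fast approximate distance (metres) for small separations."""
    phi1, phi2 = math.radians(lat1), math.radians(lat2)
    x = math.radians(lon2 - lon1) * math.cos((phi1 + phi2) / 2)
    y = phi2 - phi1
    return R * math.hypot(x, y)
```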

5. Bearing
In general, your current heading will vary as you follow a great circle path (orthodrome); the final
heading will differ from the initial heading by varying degrees according to distance and latitude (if you
were to go from say 35°N,45°E (≈ Baghdad) to 35°N,135°E (≈ Osaka), you would start on a heading of
60° and end up on a heading of 120°!).
This formula is for the initial bearing (sometimes referred to as forward azimuth) which if followed in a
straight line along a great-circle arc will take you from the start point to the end point:
θ = atan2( sin Δλ ⋅ cos φ2 , cos φ1 ⋅ sin φ2 − sin φ1 ⋅ cos φ2 ⋅ cos Δλ )
where φ1, λ1 is the start point, φ2, λ2 the end point (Δλ is the difference in longitude)

6. Midpoint
This is the half-way point along a great circle path between the two points.
𝐵𝑥 = 𝑐𝑜𝑠 𝜑2 ⋅ 𝑐𝑜𝑠 𝛥𝜆
𝐵𝑦 = 𝑐𝑜𝑠 𝜑2 ⋅ 𝑠𝑖𝑛 𝛥𝜆

𝜑𝑚 = 𝑎𝑡𝑎𝑛2 (sin 𝜑1 + sin 𝜑2 , √(cos 𝜑1 + 𝐵𝑥 )2 + 𝐵𝑦2 )

𝜆𝑚 = 𝜆1 + 𝑎𝑡𝑎𝑛2(𝐵𝑦 , 𝑐𝑜𝑠(𝜑1 ) + 𝐵𝑥 )
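The bearing and midpoint formulas also translate directly; together with the distance function above they cover everything needed for the homework below (a sketch; inputs in degrees, bearing normalised to 0–360°):

```python
import math

def initial_bearing(lat1, lon1, lat2, lon2):
    """Initial bearing (forward azimuth) in degrees from point 1 to point 2."""
    phi1, phi2 = math.radians(lat1), math.radians(lat2)
    dlmb = math.radians(lon2 - lon1)
    y = math.sin(dlmb) * math.cos(phi2)
    x = math.cos(phi1) * math.sin(phi2) - math.sin(phi1) * math.cos(phi2) * math.cos(dlmb)
    return math.degrees(math.atan2(y, x)) % 360.0


def midpoint(lat1, lon1, lat2, lon2):
    """Half-way point (degrees) along the great circle between two points."""
    phi1, phi2 = math.radians(lat1), math.radians(lat2)
    dlmb = math.radians(lon2 - lon1)
    bx = math.cos(phi2) * math.cos(dlmb)
    by = math.cos(phi2) * math.sin(dlmb)
    phi_m = math.atan2(math.sin(phi1) + math.sin(phi2),
                       math.sqrt((math.cos(phi1) + bx) ** 2 + by ** 2))
    lmb_m = math.radians(lon1) + math.atan2(by, math.cos(phi1) + bx)
    return math.degrees(phi_m), math.degrees(lmb_m)
```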


H.W

If the journey starts from Baghdad (φ1: 33.310116, λ1: 44.404674) and ends at Cairo (φ2: 30.052798,
λ2: 31.238379), find the following:
1. Distance.
2. Initial bearing.
3. Midpoint:

Lecture 04 Navigation systems Dr. Akeel Ali Wannas

Inertial Navigation System INS

An inertial navigation system is an autonomous system that provides information about position,
velocity and attitude based on the measurements by inertial sensors and applying the dead reckoning
(DR) principle. DR is the determination of the vehicle’s current position from knowledge of its previous
position and the sensors measuring accelerations and angular rotations. Given specified initial
conditions, one integration of acceleration provides velocity and a second integration gives position.
Angular rates are processed to give the attitude of the moving platform in terms of pitch, roll and yaw,
and also to transform navigation parameters from the body frame to the local-level frame.

1. Principle of Inertial Navigation


The principle of inertial navigation is based upon Newton’s first law of motion, which states
A body continues in its state of rest, or uniform motion in a straight line unless it is compelled to change
that state by forces impressed on it.
Simply, this law says that a body at rest tends to remain at rest and a body in motion tends to remain in
motion unless acted upon by an outside force. The full meaning of this is not easily visualized in the
Earth’s reference frame. For it to apply, the body must be in an inertial reference frame (a non-rotating
frame in which there are no inherent forces such as gravity).
Newton’s second law of motion shares importance with his first law in the inertial navigation system,
and states
Acceleration is proportional to the resultant force and is in the same direction as this force.
This can be expressed mathematically as
𝐹 = 𝑚𝑎 (1)
where
𝐹 is the force
𝑚 is the mass of the body
𝑎 is the acceleration of the body due to the applied force 𝐹.
The physical quantity pertinent to an inertial navigation system is acceleration because both velocity 𝑣
and displacement 𝑠 can be derived from acceleration by the process of integration. Conversely, velocity
and acceleration can be estimated by differentiation from displacement, written mathematically
v = ds/dt ;  a = dv/dt = d²s/dt²   (2)

Differentiation is the process of determining how one physical quantity varies with respect to another.
Integration, the inverse of differentiation, is the process of summing all rate-of-change that occurs
within the limits being investigated, which can be written mathematically as
𝑣 = ∫ 𝑎𝑑𝑡 ; 𝑠 = ∫ 𝑣𝑑𝑡 = ∬ 𝑎𝑑𝑡𝑑𝑡 (3)


An inertial navigation system is an integrating system consisting of a detector and an integrator. It
detects acceleration, integrates this to derive the velocity and then integrates that to derive the
displacement.
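A minimal one-dimensional sketch of this detect-and-integrate idea: given acceleration samples at a fixed rate, two numerical integrations (simple trapezoidal sums here, purely for illustration) yield velocity and then displacement.

```python
import numpy as np

def integrate_acceleration(acc, dt, v0=0.0, s0=0.0):
    """Dead-reckon velocity and displacement from acceleration samples.

    acc : acceleration samples (m/s^2) taken every dt seconds
    v0, s0 : initial velocity (m/s) and position (m)
    Returns arrays of velocity and position at the sample times.
    """
    acc = np.asarray(acc, dtype=float)
    v = np.empty(len(acc))
    s = np.empty(len(acc))
    v[0], s[0] = v0, s0
    for k in range(1, len(acc)):
        v[k] = v[k - 1] + 0.5 * (acc[k - 1] + acc[k]) * dt   # first integration
        s[k] = s[k - 1] + 0.5 * (v[k - 1] + v[k]) * dt       # second integration
    return v, s

# A constant 1 m/s^2 for 10 s should give v = 10 m/s and s = 50 m
v, s = integrate_acceleration(np.ones(1001), dt=0.01)
print(v[-1], s[-1])
```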

2. Physical Implementation of an INS


There are two implementation approaches to an INS: (1) a stable platform system also known as a
gimbaled system, and (2) a strapdown system. The components of these systems are shown in Fig. 1. In
the stable platform, the inertial sensors are mounted on a set of gimbals such that the platform always
remains aligned with the navigation frame. This is done by having a set of torque motors rotate the
platform in response to rotations sensed by the gyroscopes. Thus the output of the accelerometers is
directly integrated for velocity and position in the navigation frame. Since gimbaled systems are
mechanically complex and expensive, their use is limited.

Fig. 1. Arrangement of the components of a gimbaled IMU (left) and a strapdown IMU (right)

Table 1 Comparison of gimbaled platform and strapdown navigation systems

Characteristic | Strapdown systems | Gimbaled systems
Size | Relatively small | Bigger
Weight | Relatively lighter | Heavy
Performance | High accuracy | Superior performance
Robustness | Highly reliable, immune to shocks and vibrations | High reliability, low immunity to shocks and vibrations

Advances in electronics gave rise to strapdown systems. In these, the inertial sensors are rigidly
mounted onto the body of the moving platform and the gimbals are replaced by a computer that
simulates the rotation of the platform by software frame transformation. Rotation rates measured by the
gyroscopes are applied to continuously update the transformation between the body and navigation
frames. The accelerometer measurements are then passed through this transformation to obtain the
acceleration in the navigation frame. Strapdown systems are favored for their reliability, flexibility, low
power usage, being lightweight and less expensive than stable platforms. The transition to strapdown
systems was facilitated by the introduction of optical gyros to replace rotor gyros, and by the rapid
development of the processor technology required to perform the computations. Table 1 gives a
comparison of the major characteristics of the two systems.


An INS can be thought of as consisting of three principal modules: an inertial measurement unit (IMU),
a pre-processing unit, and a mechanization module. An IMU uses three mutually orthogonal
accelerometers and three mutually orthogonal gyroscopes. The signals are pre-processed by some form
of filtering to eliminate disturbances prior to the mechanization algorithm which converts the signals
into positional and attitude information. The three major modules of an INS are shown in Fig. 2.

Fig. 2. The principal modules of an inertial navigation system


3. Inertial Measurement Unit
The measurements of the acceleration and the rotation of the vehicle are made by a suite of inertial
sensors mounted in a unit called the inertial measurement unit (IMU). This holds two orthogonal sensor
triads, one with three accelerometers and the other with three gyroscopes. Accelerometers measure
linear motion in three mutually orthogonal directions, whereas gyroscopes measure angular motion in
three mutually orthogonal directions. Nominally, the axes of these two triads are parallel, sharing the
origin of the accelerometer triad. The sensor axes are fixed in the body of the IMU, and are therefore
called the body axes or body frame. Apart from the inertial sensors, the IMU also contains related
electronics to perform self-calibration, to sample the inertial sensor readings and then to convert them
into the appropriate form for the navigation equipment and algorithms. Fig. 3. shows the components of
a typical IMU.

Fig. 3. The components of a typical inertial measurement unit (IMU)


4. Inertial Sensors
A brief description of the two main kinds of inertial sensors, accelerometers and gyroscopes, now
follows.
4.1. Accelerometers
An accelerometer consists of a proof mass, m, connected to a case by a pair of springs as shown in Fig.
4. In this case the sensitive axis of the accelerometer is along the spring in the horizontal axis.


Acceleration will displace the proof mass from its equilibrium position, with the amount of
displacement proportional to the acceleration. The displacement from the equilibrium position is sensed
by a pickoff and is then scaled to provide an indication of acceleration along this axis. The equilibrium
position is calibrated for zero acceleration. Acceleration to the right causes the proof mass to move left relative to the case, which (as shown by the scale) indicates positive acceleration.

Fig. 4 (a) An accelerometer in the null position with no force acting on it, (b) the same accelerometer
measuring a linear acceleration of the vehicle in the positive direction (to the right)
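As a rough numerical illustration of this spring–mass principle (the mass, stiffness, and displacement values below are hypothetical, not taken from the lecture), the steady-state balance k·x = m·a lets the pickoff displacement x be scaled into an indicated acceleration:

```python
# Hypothetical spring-mass accelerometer numbers (illustrative only, not from
# the lecture): at steady state the spring force balances the inertial force,
# k * x = m * a, so the pickoff displacement x scales into an acceleration.
m = 0.001           # proof mass, kg
k = 40.0            # combined spring stiffness, N/m
x = 0.00025         # measured pickoff displacement, m

a = (k / m) * x     # indicated acceleration along the sensitive axis, m/s^2
print(f"indicated acceleration = {a:.2f} m/s^2")   # -> 10.00 m/s^2
```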
If the accelerometer is stood on a bench with its sensitive axis vertical in the presence of a gravitational
field, the proof mass will be displaced downward with respect to the case, indicating positive
acceleration. The fact that the gravitational acceleration is downward, in the same direction as the
displacement as shown in Fig. 5, is sometimes a cause of confusion for beginners in navigation.

Fig. 5 An accelerometer resting on a bench with gravitational acceleration acting on it
Fig. 6 An accelerometer resting on a bench where the reaction to the gravitational acceleration is acting on it
The explanation for this lies in the equivalence principle, according to which it is not possible, in the terrestrial environment, to separate inertial acceleration from gravitation by an accelerometer measurement at a single point. Therefore, the output of an accelerometer due to a gravitational field is the negative of the field acceleration. The output of an accelerometer is called the specific force and is given by
𝑓 =𝑎−g (4)


where
𝑓 is the specific force
𝑎 is the acceleration with respect to the inertial frame
g is the gravitational acceleration, which is +9.8 m/s².
It is this which causes confusion. The easy way to remember this relation is to think of one of two cases.
If the accelerometer is sitting on a bench it is at rest so acceleration a is zero. The force on the
accelerometer is the force of reaction of the bench against the case, which is the negative of g along the
positive (upward) direction and therefore causes the mass to move downward (Fig. 6).
Or imagine dropping the accelerometer in a vacuum. In this case, the specific force read by the
accelerometer f is zero and the actual acceleration is 𝑎 = g. To navigate with respect to the inertial
frame we need a, therefore in the navigation equations, we convert the output of an accelerometer from f
to a by adding g.
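The following minimal sketch (not part of the lecture) illustrates this bookkeeping for an accelerometer triad, assuming an up-positive vertical axis so that 𝐠 = (0, 0, −9.8) m/s²; it reproduces the bench and free-fall cases described above by forming a = f + g:

```python
import numpy as np

# Minimal sketch (not from the lecture): recovering the inertial acceleration
# a from the specific force f of an accelerometer triad via a = f + g,
# assuming an up-positive vertical axis so that g = (0, 0, -9.8) m/s^2.
g = np.array([0.0, 0.0, -9.8])

# At rest on a bench the accelerometer senses the bench reaction, f = -g.
f_at_rest = np.array([0.0, 0.0, 9.8])
print(f_at_rest + g)        # -> [0. 0. 0.]   the true acceleration is zero

# In free fall in a vacuum the accelerometer senses nothing, f = 0.
f_free_fall = np.zeros(3)
print(f_free_fall + g)      # -> [0. 0. -9.8] the body accelerates with g
```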
4.1.1. Accelerometer Measurements
An accelerometer measures translational acceleration (less the gravity component) along its sensitive
axis typically by sensing the motion of a proof mass relative to the case. From Eq. (4) the output of an
accelerometer triad is
𝒇 = 𝒂 − 𝐠        (5)
where 𝒇 is the specific force vector, 𝒂 is the acceleration vector of the body, and 𝐠 is the gravitational
vector. The acceleration 𝒂 can be expressed as the second derivative of the position vector 𝒓, taken with respect to the inertial frame (denoted by the subscript i):
𝒂 = (d²𝒓/dt²)|ᵢ = 𝒓̈        (6)

4.2. Gyroscopes
To fully describe the motion of a body in 3-D space, rotational motion, as well as translational motion,
must be measured. Sensors that measure angular rates with respect to an inertial frame of reference are
called gyroscopes. If the angular rates are mathematically integrated this will provide the change in
angle with respect to an initial reference angle. Traditionally, these rotational measurements are made
using the angular momentum of a spinning rotor. The gyroscopes either output angular rate or attitude
depending upon whether they are of the rate sensing or rate integrating type. It is customary to use the
word gyro as a short form of the word gyroscope, so in the ensuing treatment, these words are used
interchangeably.
Traditional gyroscopes were mechanical and based on angular momentum, but more recent ones are
based on either the Coriolis effect on a vibrating mass or the Sagnac interference effect. There are three
main types of gyroscopes: mechanical gyroscopes, optical gyroscopes, and micro-electro-mechanical
system (MEMS) gyroscopes.


4.2.1. Gyroscope Measurements


Gyros measure the angular rate of a body with respect to the navigation frame, the rotation of the
navigation frame with respect to the Earth-fixed frame, and the rotation of the Earth as it spins on its
axis with respect to inertial space. These quantities are all expressed in the body frame and can be given
as
𝜔_ib^b = 𝜔_ie^b + 𝜔_en^b + 𝜔_nb^b
where
𝜔_ib^b is the rotation rate of the body with respect to the i-frame
𝜔_nb^b is the rotation rate of the body with respect to the navigation frame (also referred to as the n-frame)
𝜔_en^b is the rotation rate of the navigation frame with respect to the e-frame
𝜔_ie^b is the rotation rate of the Earth with respect to the i-frame.
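A small sketch of how this relation might be used in practice (all numerical values are hypothetical): given the raw gyro output 𝜔_ib^b, the body-to-navigation rotation rate is isolated by subtracting the Earth rate and the transport rate, both assumed to be already resolved in the body frame.

```python
import numpy as np

# Illustrative sketch (hypothetical numbers): isolating the body-to-navigation
# rotation rate from the raw gyro output by rearranging the relation above,
#   w_nb_b = w_ib_b - w_ie_b - w_en_b,
# with every term already resolved in the body frame.
w_ib_b = np.array([0.010, -0.002, 0.050])    # rad/s, measured by the gyro triad
w_ie_b = np.array([0.0, 0.0, 7.292e-5])      # Earth rotation rate in the body frame
w_en_b = np.array([0.0, 1.0e-6, 0.0])        # transport rate (motion over the Earth)

w_nb_b = w_ib_b - w_ie_b - w_en_b            # rotation of the body w.r.t. the n-frame
print(w_nb_b)
```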

Exercises
1. Define the inertial navigation system.
2. What is the principle of inertial navigation?
3. What does the position estimation depend on in INS?
4. What is the working principle of the accelerometer?
5. What is the working principle of the gyroscope?
6. What are the two implementation approaches to an INS? Compare them.
7. What are the three principal modules of an INS? List and explain them with an illustration.
8. What is the meaning of the inertial measurement unit?
9. How does gravitational acceleration affect the accelerometer? Explain, with an equation, how this issue is resolved.
10. Which frames are used in the equation for the angular rate expressed in the body frame? List them.

Lecture 05 Navigation systems Dr. Akeel Ali Wannas

Inertial Navigation System INS


1. Basics of Inertial Navigation
As mentioned before, inertial positioning is based on the simple principle that differences in position
can be determined by a double integration of acceleration, sensed as a function of time in a well-defined
and stable coordinate frame. Mathematically, we can express this as
∆𝑃(𝑡) = 𝑃(𝑡) − 𝑃(𝑡₀) = ∫ₜ₀ᵗ ∫ₜ₀ᵗ 𝑎(𝑡) 𝑑𝑡 𝑑𝑡        (10)
where
𝑃(𝑡𝑜 ) is the initial point of the trajectory
𝑎(𝑡) is the acceleration along the trajectory, obtained from inertial sensor measurements and expressed in the coordinate frame prescribed by 𝑃(𝑡).

Fig. 7 One-dimensional (1D) inertial navigation, with the green cylinder depicting the accelerometer
1.1. Navigation in One Dimension
To comprehend the full-scale three-dimensional inertial system it is easier to start with an example of a
one-dimensional (1D) inertial system with a single axis. For this, consider a vehicle moving in a straight
line (i.e. in a fixed direction) as shown in Fig.7. To calculate its velocity and position, which are the
only unknowns in this case, we need only a single accelerometer mounted on the vehicle that has its
sensitive axis along the direction of motion.
With prior knowledge of the initial position 𝑦 = 𝑦0 and initial velocity 𝑣 = 𝑣0 of the vehicle, we are
able to calculate its velocity 𝑣𝑡 at any time 𝑡 by integrating the output of the accelerometer 𝑎𝑦 as follows

𝑣𝑡 = ∫ 𝑎𝑦 𝑑𝑡 = 𝑎𝑦 𝑡 + 𝑣𝑜        (11)        (assuming a constant acceleration 𝑎𝑦)

A second integration will yield the position 𝑦𝑡 of the vehicle at time 𝑡

𝑦𝑡 = ∫ 𝑣𝑡 𝑑𝑡

𝑦𝑡 = ∫(𝑎𝑦 𝑡 + 𝑣𝑜 )𝑑𝑡 (12)


𝑦𝑡 = ½ 𝑎𝑦 𝑡² + 𝑣𝑜 𝑡 + 𝑦𝑜
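In a real system the accelerometer is sampled rather than integrated analytically, so Eqs. (11)–(12) are evaluated numerically. The sketch below is illustrative only (the function name and sample values are not from the lecture); it uses the trapezoidal rule and checks the result against the constant-acceleration case above.

```python
import numpy as np

# A sketch of Eqs. (11)-(12) in discrete time (function name and sample values
# are illustrative): sampled accelerometer readings a_y[k], assumed to lie
# along the direction of motion, are integrated twice with the trapezoidal rule.
def navigate_1d(a_y, dt, v0=0.0, y0=0.0):
    v = np.zeros(len(a_y))
    y = np.zeros(len(a_y))
    v[0], y[0] = v0, y0
    for k in range(1, len(a_y)):
        v[k] = v[k-1] + 0.5 * (a_y[k-1] + a_y[k]) * dt   # velocity = integral of a
        y[k] = y[k-1] + 0.5 * (v[k-1] + v[k]) * dt       # position = integral of v
    return v, y

# Constant 2 m/s^2 for 5 s starting from rest: expect v = 10 m/s, y = 25 m.
a_samples = np.full(51, 2.0)                 # 51 samples at dt = 0.1 s
v, y = navigate_1d(a_samples, dt=0.1)
print(v[-1], y[-1])                          # -> 10.0 25.0
```

The trapezoidal rule is just one reasonable discretization; a real mechanization algorithm would follow the IMU's actual sampling scheme.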
1.2. Navigation in Two Dimensions
Extending the concept of navigation from the simple 1D example to 2D makes the implementation more
complex, mainly because we need the acceleration to be in the same frame as the coordinate system.
This requires the transformation of the acceleration measured by the accelerometers from the INS frame
to a stable Earth-fixed coordinate frame. The stable Earth-fixed coordinate frame is often chosen as a
local-level frame that is referred to as the navigation frame. As stated earlier, the transformation can
either be established mechanically inside the INS by a stable platform or numerically as in the
strapdown concept.

Fig. 8 Inertial navigation using a 2D strapdown system


In 2D it is necessary to monitor both the translational motion of the vehicle in two directions and also its
rotational motion, manifested as a change in direction. Two accelerometers are required to detect the
acceleration in two directions. One gyroscope is required to detect the rotational motion in a direction
perpendicular to the plane of motion (for simplicity, we neglect the Earth’s rotation which would also be
detected). Based on the advantages provided by a strapdown system, from this point on we shall limit
our discussion to this type of system.
Strapdown systems mathematically transform the output of the accelerometers attached to the body into
the east-north coordinate system (the 2D form of ENU) prior to performing the mathematical
integration. These systems use the output of the gyroscope attached to the body to continuously update
the transformation that is utilized to convert from body coordinates to east-north coordinates. Figure 8
shows the concept of inertial navigation in 2D as a platform makes turns, rotating through an angle A
(called the azimuth angle¹) measured from the north. The blue cylindrical objects depict the accelerometers, and the gyroscope is a blue disc whose sensitive axis, depicted by a red dot, points out of the paper towards the reader.

¹ The terms azimuth angle and yaw angle are both used to represent the deviation from the north. The difference lies in the direction of measurement: the azimuth angle is measured clockwise from north whereas the yaw angle is measured counterclockwise.


The accelerometers measure the acceleration of the body axes (X and Y) but we need the acceleration in
the east-north coordinate system. This is accomplished using a transformation matrix which can be
explained with the help of the diagram shown in Fig. 9.

Fig. 9 Transformation from the vehicle frame (X-Y) to the navigation frame (E-N)
The vehicle axes 𝑋 and 𝑌 make an angle 𝐴 with the east and north directions respectively, and the
accelerations along east direction 𝑎𝐸 and the north direction 𝑎𝑁 can be written as
𝑎𝐸 = 𝑎𝑥 𝑐𝑜𝑠 𝐴 + 𝑎𝑦 𝑠𝑖𝑛 𝐴 (13)
𝑎𝑁 = − 𝑎𝑥 𝑠𝑖𝑛 𝐴 + 𝑎𝑦 𝑐𝑜𝑠 𝐴 (14)
which in the matrix form is
[𝑎𝐸]   [ cos 𝐴    sin 𝐴 ] [𝑎𝑥]
[𝑎𝑁] = [ −sin 𝐴   cos 𝐴 ] [𝑎𝑦]        (15)
and can be expressed more compactly as
𝑎ⁿ = 𝑅𝑏ⁿ 𝑎ᵇ        (16)
where
𝑎ⁿ is the acceleration in the navigation frame (E-N)
𝑎ᵇ is the acceleration in the body frame measured by the accelerometers
𝑅𝑏ⁿ is the rotation matrix which rotates 𝑎ᵇ into the navigation frame.
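A brief sketch of Eqs. (15)–(16) in code (the function name and test values are illustrative assumptions): it builds 𝑅𝑏ⁿ from the azimuth angle A and rotates a body-frame acceleration into east-north components.

```python
import numpy as np

# A small sketch of Eqs. (15)-(16); the function name and test values are
# illustrative assumptions. R_b^n is built from the azimuth angle A and used
# to rotate a body-frame acceleration into east-north components.
def body_to_en(a_b, A_deg):
    A = np.radians(A_deg)
    R_b_n = np.array([[ np.cos(A), np.sin(A)],
                      [-np.sin(A), np.cos(A)]])
    return R_b_n @ a_b                      # a^n = R_b^n a^b

a_b = np.array([0.0, 1.0])                  # 1 m/s^2 along the vehicle's forward (Y) axis
print(body_to_en(a_b, 0.0))                 # heading north: acceleration is all north
print(body_to_en(a_b, 90.0))                # heading east:  acceleration is all east
```

With A = 0° the forward acceleration maps entirely to north, and with A = 90° it maps entirely to east, which matches an azimuth measured clockwise from north.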
Given the accelerations in the navigation frame, we can integrate to obtain the velocities

𝑣𝐸 = ∫(𝑎𝑥 cos 𝐴 + 𝑎𝑦 sin 𝐴) 𝑑𝑡
𝑣𝑁 = ∫(−𝑎𝑥 sin 𝐴 + 𝑎𝑦 cos 𝐴) 𝑑𝑡        (17)


and again to obtain the position in the navigation frame

𝑥𝐸 = ∬(𝑎𝑥 cos 𝐴 + 𝑎𝑦 sin 𝐴) 𝑑𝑡 𝑑𝑡
𝑥𝑁 = ∬(−𝑎𝑥 sin 𝐴 + 𝑎𝑦 cos 𝐴) 𝑑𝑡 𝑑𝑡        (18)

which in the matrix form is

[𝑥𝐸]        [ cos 𝐴    sin 𝐴 ] [𝑎𝑥]
[𝑥𝑁] = ∬ [ −sin 𝐴   cos 𝐴 ] [𝑎𝑦] 𝑑𝑡 𝑑𝑡        (19)
It may be noted that this whole process is dependent on knowing the azimuth angle A which is
calculated from the measurement by the gyroscope that monitors angular changes in the orientation of
the accelerometers from the local E-N frame. These angular changes resolve the accelerometer
measurements from the sensor axes into the local E-N axes. This angular change also determines the
direction of motion of the moving platform defined by the azimuth angle, which is also known as the
heading angle because it is the deviation from the north direction in the E-N plane. This is based on
mathematically integrating the gyroscope angular velocity measurements relative to the initial azimuth
angle 𝐴0 as follows

𝐴(𝑡) = ∫ 𝜔𝑔𝑦𝑟𝑜 𝑑𝑡 + 𝐴𝑜 (20)


In this equation, it should be noted (as pointed out previously) that the Earth’s rotation components have
been neglected for simplicity and ease of understanding of the basic concept of navigation.

1.3. Navigation in Three Dimensions


Inertial navigation in three dimensions (3D) requires three gyroscopes to measure the attitude angles of
the body (pitch, roll, and azimuth) and three accelerometers to measure accelerations along the three
axes (in the east, north, and up directions). Another complication is the involvement of gravity in the
accelerations. The total acceleration encountered by the body is what is measured by the accelerometers,
a combination of the acceleration due to gravity and that due to all other external forces. In order to
remove the component of acceleration due to gravity, the tilt (or attitude) of the accelerometer with
respect to the local vertical must be supplied by the gyroscope; the 2D treatment above provides the background needed to understand this more general case.
The operation of an INS is based on processing the inertial sensor measurements received at its input
and yielding a set of navigation parameters (position, velocity, and attitude) of the moving platform at
its output. In general, these parameters are determined in a certain reference frame. Figure 10 shows the
general concept of the inertial navigation system.

Fig. 10 The concept of inertial navigation


The accelerometers are attached to the moving platform in order to monitor its accelerations in three
mutually orthogonal directions. The gyroscopes provide the attitude (pitch, roll, and azimuth) of the
moving platform, and their measurements are used to rotate the data from the accelerometers into the
navigation frame. The time integral of each acceleration component gives a continuous estimate of the
corresponding velocity component of the platform relative to the initial velocities. A second integration
yields the position with respect to a known starting point in a given frame of reference. This principle is
outlined in Fig. 11.

Fig. 11 The general principle of inertial navigation in 3D

Example 2:
The accelerometer and gyroscope devices are installed on the body of a UAV (strapdown IMU)
traveling in a plane (2D motion). These devices give the data shown in the table below. Calculate the linear
displacement, linear velocity, linear acceleration, and azimuth angle relative to the navigation axes
system.
Time t (s) | ax (m/s²) | ay (m/s²) | ωgyro (deg/s)
0  | 0  | 0  | 0
1  | 1  | 1  | 5
2  | 2  | 2  | 10
3  | 3  | 3  | 5
4  | 4  | 4  | 10
5  | 3  | 3  | 6
6  | 2  | 2  | 2
7  | 1  | 1  | -2
8  | 0  | 0  | -6
9  | -1 | -1 | -10
10 | -2 | -2 | -14

The figure shows the movement of the UAV in the navigation frame.
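One way to reproduce Example 2 numerically is sketched below. It is an illustrative solution approach rather than the lecture's official working: it assumes the gyro column is an angular rate in deg/s, a sampling interval of 1 s, simple rectangular integration, and neglects the Earth's rotation as the text does.

```python
import numpy as np

# An illustrative way to work Example 2 (not the lecture's official solution).
# Assumptions: the gyro column is an angular rate in deg/s, the sampling
# interval is 1 s, integration is rectangular, and the Earth's rotation is
# neglected as in the text.
dt = 1.0
ax = np.array([0, 1, 2, 3, 4, 3, 2, 1, 0, -1, -2], dtype=float)        # body X, m/s^2
ay = np.array([0, 1, 2, 3, 4, 3, 2, 1, 0, -1, -2], dtype=float)        # body Y, m/s^2
w  = np.array([0, 5, 10, 5, 10, 6, 2, -2, -6, -10, -14], dtype=float)  # deg/s

A = 0.0                       # azimuth angle in degrees, initially pointing north
vE = vN = xE = xN = 0.0       # initial velocity and position in the E-N frame
for k in range(len(ax)):
    A += w[k] * dt                                   # Eq. (20)
    Ar = np.radians(A)
    aE = ax[k] * np.cos(Ar) + ay[k] * np.sin(Ar)     # Eq. (13)
    aN = -ax[k] * np.sin(Ar) + ay[k] * np.cos(Ar)    # Eq. (14)
    vE += aE * dt                                    # first integration
    vN += aN * dt
    xE += vE * dt                                    # second integration
    xN += vN * dt
    print(f"t={k:2d} s  A={A:6.1f} deg  vE={vE:6.2f}  vN={vN:6.2f}  xE={xE:7.2f}  xN={xN:7.2f}")
```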

Exercises

1. How many gyros and accelerometers are needed for a vehicle moving in one dimension, and around/along which axes should they be placed?
2. How many gyros and accelerometers are needed for a vehicle moving in two dimensions, and around/along which axes should they be placed?
3. How many gyros and accelerometers are needed for a vehicle moving in three dimensions, and around/along which axes should they be placed?
4. Write the transformation matrix which transforms from the vehicle frame (X-Y) to the
navigation frame (E-N) in 2-D with illustration.
5. Explain the concept of inertial navigation in the three-dimension with illustration.
6. Define the azimuth angle and the yaw angle. What is the difference between them?

Lecture 06 Navigation systems Dr. Akeel Ali Wannas

Inertial Navigation System INS


1. Inertial Sensor Performance Characteristics
To assess an inertial sensor for a particular application, numerous characteristics must be considered.
But first we will introduce some general terms.
a. Repeatability: The ability of a sensor to provide the same output for repeated applications of the
same input, presuming all other factors in the environment remain constant. It refers to the maximum
variation between repeated measurements in the same conditions over multiple runs.
b. Stability: This is the ability of a sensor to provide the same output when measuring a constant input
over a period of time. It is defined for a single run.
c. Drift: The term drift is often used to describe the change that occurs in a sensor measurement when
there is no change in the input. It is also used to describe the change that occurs when there is zero input.
The performance characteristics of inertial sensors (either accelerometers or gyroscopes) are usually
described in terms of the following principal parameters: sensor bias, sensor scale factor, noise and
bandwidth. These parameters (among others) will be discussed in the next section, which deals with the
errors of inertial sensors.
2. Inertial Sensor Errors
Inertial sensors are prone to various errors which get more complex as the price of the sensor goes
down. The errors limit the accuracy to which the observables can be measured. They are classified
according to two broad categories of systematic and stochastic (or random) errors.
2.1. Systematic Errors
These types of errors can be compensated by laboratory calibration, especially for high-end sensors.
Some common systematic sensor errors are described below.
1. Systematic Bias Offset
This is a bias offset exhibited by all accelerometers and gyros. It is defined as the output of the sensor
when there is zero input, and is depicted in Fig. 1. It is independent of the underlying specific force and
angular rate.
2. Scale Factor Error
This is the deviation of the input–output gradient from unity. The accelerometer output error due to scale
factor error is proportional to the true specific force along the sensitive axis, whereas the gyroscope
output error due to scale factor error is proportional to the true angular rate about the sensitive axis.
Figure 2 illustrates the effect of the scale factor error.
3. Non-linearity
This is non-linearity between the input and the output, as shown in Fig. 3.
4. Scale Factor Sign Asymmetry
This is due to the different scale factors for positive and negative inputs, as shown in Fig. 4.


Figure 1 Inertial sensor bias
Figure 2 Inertial sensor scale factor error

Figure 3 Non-linearity of inertial sensor output
Figure 4 Scale factor sign asymmetry

Figure 5 Dead zone in the output of an inertial sensor
Figure 6 The error due to quantization of an analog signal to a digital signal

5. Dead Zone


This is the range where there is no output despite the presence of an input, and it is shown in Fig. 5.
6. Quantization Error
This type of error is present in all digital systems which generate their inputs from analog signals, and is
illustrated in Fig. 6.
7. Non-orthogonality Error
As the name suggests, non-orthogonality errors occur when any of the axes of the sensor triad depart
from mutual orthogonality. This usually happens at the time of manufacturing. Figure 7 depicts the case
of the z-axis being misaligned by an angular offset of 𝜃𝑧𝑥 from the xz-plane and 𝜃𝑧𝑦 from the yz-plane.

8. Misalignment Error
This is the result of misaligning the sensitive axes of the inertial sensors relative to the orthogonal axes
of the body frame as a result of mounting imperfections. This is depicted in Fig. 8 for a sensor frame
misalignment (using superscript ‘s’) with respect to the body in a 2D system in which the axes are offset
by the small angle 𝛿𝜃.

Figure 7 Sensor axes non-orthogonality error
Figure 8 Misalignment error between the body frame and the sensor axes
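Because the systematic errors above are deterministic, they can be written as a simple measurement model and removed by calibration. The sketch below uses hypothetical single-axis gyro values (not from the lecture) and combines just the bias offset and the scale factor error:

```python
# Hypothetical single-axis gyro numbers (not from the lecture): the deterministic
# part of the output can be written as
#   measured = (1 + scale_error) * true_rate + bias
bias = 0.01            # rad/s, sensor output at zero input
scale_error = 0.002    # 0.2 % deviation of the input-output gradient from unity

def measured_rate(true_rate):
    return (1.0 + scale_error) * true_rate + bias

print(measured_rate(0.0))    # -> 0.01  (bias offset alone)
print(measured_rate(1.0))    # -> 1.012 (bias plus scale factor error)
```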


2.2. Random Errors


Inertial sensors suffer from a variety of random errors which are usually modeled stochastically in order
to mitigate their effects.
1. Run-to-Run Bias Offset
If the bias offset changes for every run, this falls under the bias repeatability error, and is called the run-to-run bias offset.
2. Bias Drift
This is a random change in bias over time during a run. It is the instability in the sensor bias for a single
run, and is called bias drift. It is illustrated in Fig. 9. Bias is deterministic but bias drift is stochastic. One
cause of bias drift is a change in temperature.

Figure 9 Error in sensor output due to bias drift Figure 10 A depiction of white noise error

3. Scale Factor Instability


This is a random change in the scale factor during a single run, usually the result of temperature variations.
The scale factor can also change from run to run, but stay constant during a particular run. This
demonstrates the repeatability of the sensor and is also called the run-to-run scale factor.
4. White Noise
This is an uncorrelated noise that is evenly distributed in all frequencies. This type of noise can be
caused by power sources but can also be intrinsic to semiconductor devices. White noise is illustrated in
Fig. 10.
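The random errors, in contrast, have to be described statistically. The following illustrative simulation (all noise parameters are hypothetical) generates a zero-input sensor output corrupted by white noise plus a bias drift modelled as a random walk:

```python
import numpy as np

# Illustrative simulation with hypothetical noise parameters: a zero input
# corrupted by white noise plus a bias drift modelled as a random walk.
rng = np.random.default_rng(0)
n = 1000
white = rng.normal(0.0, 0.05, n)                # white noise, standard deviation 0.05
drift = np.cumsum(rng.normal(0.0, 0.002, n))    # bias drift as a random walk
output = white + drift                          # sensor output for a zero input
print(output[:5])
```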


Exercises
1. What are the inertial sensor performance characteristics? List and explain them.
2. What are the types of inertial sensor errors?
3. What are the types of systematic errors? List and explain them with illustrative sketches.
4. What are the types of random errors? List and explain them with illustrative sketches.
5. What kind of error does each of the graphs below represent? Name it and explain.

1 2

3 4


5 6

7 8

Lecture 07 Navigation systems Dr. Akeel Ali Wannas

INS/GPS Integration
1. Introduction

There are contrasting pros and cons to INS and GPS. An INS is a self-contained autonomous
navigation system that provides a bandwidth exceeding 200 Hz. It has good short-term accuracy
and provides attitude information in addition to position and velocity. But long-term errors grow without bound as the inertial sensor errors accumulate through the integration that is intrinsic to the
navigation algorithm. In contrast to an INS, GPS has good long-term accuracy with errors
limited to a few meters and user hardware costing as little as $100. But it has poor short-term
accuracy and a lower output data rate. A standard GPS receiver usually does not provide
attitude information, but it can with extra hardware and software. GPS needs a direct line of
sight to at least four satellites, which is not always possible because the signals from satellites
suffer from obstruction by tall buildings, trees, tunnels, degradation through the atmosphere, and
multipath interference.

Capitalizing on the complementary characteristics of these two systems, their synergistic


integration overcomes their individual drawbacks and provides a more accurate and robust
navigation solution than either could achieve on its own. The integrated navigation solution is a
continuous high data rate system that provides a full navigation solution (position, velocity and
attitude) with improved accuracy in both the short and long term. Optimal estimation techniques,
predominantly based on Kalman filtering, are employed to optimally fuse the GPS and INS
positioning and navigation information to yield a reliable navigation solution. GPS prevents the inertial solution from drifting, while the INS provides continuity in the navigation solution, supplies attitude information, and bridges GPS signal outages. A typical INS/GPS integration is depicted in Fig. 1.

The estimator compares the outputs of the INS and GPS and estimates errors in inertial position,
velocity, and attitude, plus some other parameters. Traditionally the estimator is a Kalman filter (KF). In Fig. 1
the inertial output is corrected using the estimated errors to produce the integrated navigation
solution. Dotted lines in the figure depict the optional paths, the presence of which depends upon
the specific type of integration scheme.

Figure 1 An overview of a typical INS/GPS integration


2. Kalman Filter
For statistics and control theory, Kalman filtering, also known as linear quadratic estimation (LQE), is
an algorithm that uses a series of measurements observed over time, including statistical noise and other
inaccuracies, and produces estimates of unknown variables that tend to be more accurate than those
based on a single measurement alone, by estimating a joint probability distribution over the variables for
each timeframe. The filter is named after Rudolf E. Kálmán, who was one of the primary developers of
its theory.
Kalman filtering has numerous technological applications. A common application is for guidance,
navigation, and control of vehicles, particularly aircraft, spacecraft, and dynamically positioned ships.

Figure 2 Typical use of KF in a navigation application
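To make the fusion idea concrete, the sketch below implements a deliberately simplified one-state Kalman filter (the function name, noise values, and data are assumptions, not the lecture's design): the INS velocity propagates the position estimate between epochs and each GPS fix corrects it, so the fused error stays bounded even though the INS velocity is biased.

```python
import numpy as np

# A deliberately simplified one-state Kalman filter sketch; the function name,
# noise values, and data below are assumptions, not the lecture's design.
# The INS velocity propagates the position between epochs (prediction) and
# each GPS position fix corrects it (update).
def kf_fuse(ins_velocity, gps_position, dt=1.0, Q=0.5, R=9.0):
    x, P = gps_position[0], R                # initialise from the first GPS fix
    estimates = [x]
    for k in range(1, len(gps_position)):
        x = x + ins_velocity[k - 1] * dt     # prediction with the INS velocity
        P = P + Q                            # Q models INS drift
        K = P / (P + R)                      # Kalman gain; R models GPS noise
        x = x + K * (gps_position[k] - x)    # update with the GPS fix
        P = (1.0 - K) * P
        estimates.append(x)
    return np.array(estimates)

# Hypothetical data: true speed 10 m/s, INS velocity biased by +0.3 m/s,
# GPS positions corrupted by noise with a 3 m standard deviation.
rng = np.random.default_rng(1)
true_pos = 10.0 * np.arange(20.0)
ins_vel = np.full(20, 10.3)
gps_pos = true_pos + rng.normal(0.0, 3.0, 20)
print(kf_fuse(ins_vel, gps_pos) - true_pos)  # fused error stays bounded
```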

Exercises
1. What are the advantages and disadvantages of an INS?
2. What are the advantages and disadvantages of GPS?
3. Why do we use the Kalman filter?
4. Why do we use INS/GPS integration?
