Instrumentation For Space Technology FINAL


Class Note

Instrumentation and Electronics Engineering


INTRODUCTION
Astronomers are constantly learning new star facts as space exploration and technology evolve. To make sense of these new findings, it's important to know the basics.
Stars are giant, luminous spheres of plasma. There are billions of them — including our own sun — in the Milky Way galaxy, and there are billions of galaxies in the universe. So far, we have learned that hundreds of these stars also have planets orbiting them. Since the dawn of recorded civilization, stars have played a key role in religion and proved vital to navigation, according to the International Astronomical Union.
Astronomy, the study of the heavens, may be the most ancient of the sciences. The invention of the telescope and the discovery of the laws of motion and gravity in the 17th century prompted the realization that stars were just like the sun, all obeying the same laws of physics. In the 19th century, photography and spectroscopy — the study of the wavelengths of light that objects emit — made it possible to investigate the compositions and motions of stars from afar, leading to the development of astrophysics.
In 1937, the first radio telescope was built,
enabling astronomers to detect otherwise
invisible radiation from stars. The first gamma-ray telescope launched in 1961, pioneering the study of star
explosions (supernovae). Also in the 1960s,
astronomers commenced infrared observations
using balloon-borne telescopes, gathering
information about stars and other objects based on
their heat emissions; the first infrared telescope
(the Infrared Astronomical Satellite) launched in
1983. Microwave emissions were first studied in detail from space with NASA's Cosmic Background Explorer (COBE) satellite, launched in 1989. (Microwave emissions are generally used to probe the young universe's origins, but they are occasionally used to study stars.) In 1990, the first space-based optical telescope, the Hubble Space Telescope, was launched, providing the deepest, most detailed visible-light view of the universe.
More recent and upcoming instruments continue this work: the Extremely Large Telescope (ELT), now under construction, will observe at infrared and optical wavelengths, and NASA's James Webb Space Telescope, billed as a successor to Hubble, launched in December 2021 to probe stars at infrared wavelengths. Astronomers now often use
constellations in the naming of stars. The
International Astronomical Union, the world
authority for assigning names to celestial
objects, officially recognizes 88 constellations.
Usually, the brightest star in a constellation has "alpha," the first letter of the Greek alphabet, as part of its
scientific name. The second brightest star in a constellation is typically designated "beta," the third brightest
"gamma," and so on until all the Greek letters are used, after which numerical designations follow. A number of
stars have possessed names since antiquity — Betelgeuse, for instance, means "the hand (or the armpit) of the giant" in Arabic. It is one of the brightest stars in Orion, and its scientific name is Alpha Orionis. Also, different astronomers over the years have compiled star catalogs that use unique numbering systems. The Henry Draper Catalog, named after a pioneer in astrophotography, provides spectral classification and rough positions for 272,150 stars and has been widely used by the astronomical community for over half a century. The catalog designates Betelgeuse as HD 39801.
Since there are so many stars in the universe, the IAU uses a different system for newfound stars. Most consist
of an abbreviation that stands for either the type of star or a catalog that lists information about the star, followed
by a group of symbols. For instance, PSR J1302-6350 is a pulsar, thus the PSR. The J reveals that a coordinate
system known as J2000 is being used, while the 1302 and 6350 are coordinates similar to the latitude and
longitude codes used on Earth.

In recent years, the IAU formalized several names for stars amid calls from the astronomical community to
include the public in their naming process. The IAU formalized 14 star names in the 2015 "Name ExoWorlds"
contest, taking suggestions from science and astronomy clubs around the world. Then in 2016, the IAU
approved 227 star names, mostly taking cues from antiquity in making its decision. The goal was to reduce
variations in star names and also spelling ("Fomalhaut", for example, had 30 recorded variations.) However, the
long-standing name "Alpha Centauri" – referring to a famous star

A star develops from a giant, slowly rotating cloud that is made up entirely or almost entirely of hydrogen and
helium. Due to its own gravitational pull, the cloud begins to collapse inward, and as it shrinks, it spins more
and more quickly, with the outer parts becoming a disk while the innermost parts become a roughly spherical
clump. 
According to NASA, this collapsing material grows hotter and denser, forming a ball-shaped protostar. When
the heat and pressure in the protostar reaches about 1.8 million degrees Fahrenheit (1 million degrees Celsius),
atomic nuclei that normally repel each other start fusing together, and the star ignites. Nuclear fusion converts a
small amount of the mass of these atoms into extraordinary amounts of energy — for instance, 1 gram of mass
converted entirely to energy would be equal to an explosion of roughly 22,000 tons of TNT.
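The figure of roughly 22,000 tons of TNT per gram follows directly from E = mc². The short Python sketch below reproduces the arithmetic; the conversion factor of about 4.184×10^9 J per ton of TNT is the conventional definition, assumed here rather than taken from this note.

import math

# Rough check of the mass-to-energy figure quoted above (illustrative sketch).
c = 2.998e8            # speed of light, m/s
mass = 1e-3            # 1 gram expressed in kilograms
J_PER_TON_TNT = 4.184e9

energy = mass * c**2                  # E = m c^2, in joules (~9.0e13 J)
tons_tnt = energy / J_PER_TON_TNT     # convert to tons of TNT

print(f"{energy:.2e} J  ~  {tons_tnt:,.0f} tons of TNT")   # roughly 21,500 tons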
2. STELLAR EVOLUTION
The life cycles of stars follow patterns based mostly on their initial mass. These include intermediate-mass stars
such as the sun, with half to eight times the mass of the sun, high-mass stars that are more than eight solar
masses, and low-mass stars a tenth to half a solar mass in size. The greater a star's mass, the shorter its lifespan
generally is, according to NASA. Objects smaller than a tenth of a solar mass do not have enough gravitational
pull to ignite nuclear fusion — some might become failed stars known as brown dwarfs.
An intermediate-mass star begins with a cloud that takes about
100,000 years to collapse into a protostar with a surface
temperature of about 6,750 F (3,725 C). After hydrogen fusion
starts, the result is a T-Tauri star, a variable star that fluctuates in
brightness. This star continues to collapse for roughly 10 million
years until its expansion due to energy generated by nuclear fusion
is balanced by its contraction from gravity, after which point it
becomes a main-sequence star that gets all its energy from
hydrogen fusion in its core.
The greater the mass of such a star, the more quickly it will use its
hydrogen fuel and the shorter it stays on the main sequence. After all the hydrogen in the core is fused into
helium, the star changes rapidly — without nuclear radiation to resist it, gravity immediately crushes matter
down into the star's core, quickly heating the star. This causes the star's outer layers to expand enormously and
to cool and glow red as they do so, rendering the star a red giant.  Helium starts fusing together in the core, and
once the helium is gone, the core contracts and becomes hotter, once more expanding the star but making it
bluer and brighter than before, blowing away its outermost layers.
After the expanding shells of gas fade, the remaining core is left,
a white dwarf that consists mostly of carbon and oxygen with an
initial temperature of roughly 180,000 degrees F (100,000
degrees C). Since white dwarfs have no fuel left for fusion, they grow cooler and cooler over billions of years to become black dwarfs too faint to detect. Our sun should leave the main sequence in about 5 billion years, according to Live Science.
A high-mass star forms and dies quickly. These stars form from protostars in just 10,000 to 100,000 years. While on the main sequence, they are hot and blue, some 1,000 to 1 million times as luminous as the sun and roughly 10 times wider. When they leave the main sequence, they become bright red supergiants, and eventually become hot enough to fuse carbon into heavier elements. After some 10,000 years of such fusion, the result is an iron core roughly 3,800 miles (6,000 km) wide, and since any more fusion would consume energy instead of liberating it, the star is doomed, as its nuclear radiation can no longer resist the force of gravity. When the star's core grows to more than 1.4 solar masses, electron pressure cannot support it against further collapse, according to NASA. The result is a supernova. Gravity causes the core to collapse, making the core temperature rise to nearly 18 billion degrees F (10 billion degrees C), breaking the iron down into neutrons and neutrinos. In about one second, the core shrinks to about six miles (10 km) wide and rebounds just like a rubber ball that has been squeezed, sending a shock wave through the star that causes fusion to occur in the outlying layers. The star then explodes in a so-called Type II supernova. If the remaining stellar core is less than roughly three solar masses, it becomes a neutron star made up almost entirely of neutrons; rotating neutron stars that beam out detectable radio pulses are known as pulsars. If the stellar core is larger than about three solar masses, no known force can support it against its own gravitational pull, and it collapses to form a black hole.
Low-mass stars use hydrogen fuel so sluggishly that they can shine as main-sequence stars for 100 billion to 1 trillion years — since the universe is only about 13.7 billion years old, according to NASA, this means no low-mass star has ever died. Still, astronomers calculate these stars, known as red dwarfs, will never fuse anything but hydrogen, which means they will never become red giants. Instead, they should eventually just cool to become white dwarfs and then black dwarfs.
Although our solar system has only one star, most stars like our sun are not solitary but are members of binaries, in which two stars orbit each other, or of multiple-star systems. In fact, just one-third of stars like our sun are single, while two-thirds are
multiples — for instance, the closest neighbor to our solar system, Proxima Centauri, is part of a multiple
system that also includes Alpha Centauri A and Alpha Centauri B. Still, class G stars like our sun only make up
some 7 percent of all stars we see — when it comes to systems in general, about 30 percent in our galaxy are
multiple, while the rest are single, according to Charles J. Lada of the Harvard-Smithsonian Center for
Astrophysics.

Binary stars develop when two protostars form near each other. One member of this pair can influence its
companion if they are close enough together, stripping away matter in a process called mass transfer. If one of
the members is a giant star that leaves behind a neutron star or a black hole, an X-ray binary can form, where
matter pulled from the stellar remnant's companion can get extremely hot — more than 1 million F (555,500 C)
and emit X-rays. If a binary includes a white dwarf, gas pulled from a companion onto the white dwarf's surface
can fuse violently in a flash called a nova. At times, enough gas builds up for the dwarf to collapse, leading its
carbon to fuse nearly instantly and the dwarf to explode in a Type Ia supernova, which can outshine a galaxy for
a few months.

Proxima Centauri has about an eighth of the mass of the sun. Faint red Proxima Centauri - at only 3,100 degrees
K (5,120 F) and 500 times less bright than our sun - is nearly a fifth of a light-year from Alpha Centauri A and
B. That raises some question about whether it is gravitationally bound to Alpha Centauri A and B.

KEY CHARACTERISTICS

Brightness
Astronomers describe star brightness in terms of magnitude and luminosity. The magnitude of a star is based on
a scale more than 2,000 years old, devised by Greek astronomer Hipparchus around 125 BC, according to
NASA. He numbered groups of stars based on their brightness as seen from Earth — the brightest stars were
called first magnitude stars, the next brightest were second magnitude, and so on up to sixth magnitude, the
faintest visible ones. Nowadays astronomers refer to a star's brightness as viewed from Earth as its apparent
magnitude, but since the distance between Earth and the star can affect the light one sees from it, they now also
describe the actual brightness of a star using the term absolute magnitude, which is defined by what its apparent
magnitude would be if it were 10 parsecs or 32.6 light years from Earth. The magnitude scale now runs to more
than six and less than one, even descending into negative numbers — the brightest star in the night sky is  Sirius,
with an apparent magnitude of -1.46.
Luminosity is the power of a star — the rate at which it emits energy. Although power is generally measured in
watts — for instance, the sun's luminosity is 400 trillion trillion watts— the luminosity of a star is usually
measured in terms of the luminosity of the sun. For example, Alpha Centauri A is about 1.3 times as luminous
as the sun. To figure out luminosity from absolute magnitude, one must calculate that a difference of five on the absolute magnitude scale is equivalent to a factor of 100 on the luminosity scale — for instance, a star with an absolute magnitude of 1 is 100 times as luminous as a star with an absolute magnitude of 6. The brightness of a star depends on its surface temperature and size.
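As a quick illustration of the factor-of-100 rule just described, the short Python sketch below converts a difference in absolute magnitude into a luminosity ratio using the relation L1/L2 = 100^((M2 − M1)/5); the specific values are chosen for illustration only.

# Luminosity ratio from a difference in absolute magnitude (illustrative sketch).
# A 5-magnitude difference corresponds to a factor of 100 in luminosity.

def luminosity_ratio(M1, M2):
    """Return L1/L2 for two stars with absolute magnitudes M1 and M2."""
    return 100.0 ** ((M2 - M1) / 5.0)

# Example from the text: M = 1 versus M = 6 gives a factor of 100.
print(luminosity_ratio(1.0, 6.0))   # 100.0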
Colour
Stars come in a range of colours, from reddish to yellowish to blue. The colour of a star depends on its surface temperature. A star might appear to have a single colour, but it actually emits a broad spectrum of colours, potentially including everything from radio waves and infrared rays to ultraviolet beams and gamma rays. Different elements or compounds absorb and emit different colours or wavelengths of light, and by studying a star's spectrum, one can divine what its composition might be.
Surface temperature
Astronomers measure star temperatures in a unit known as the kelvin, with a temperature of zero K ("absolute
zero") equaling minus 273.15 degrees C, or minus 459.67 degrees F. A dark red star has a surface temperature
of about 2,500 K (2,225 C and 4,040 F); a bright red star, about 3,500 K (3,225 C and 5,840 F); the sun and
other yellow stars, about 5,500 K (5,225 C and 9,440 F); a blue star, about 10,000 K (9,725 C and 17,540 F) to
50,000 K (49,725 C and 89,540 F). The surface temperature of a star depends in part on its mass and affects its
brightness and color. Specifically, the luminosity of a star is proportional to temperature to the fourth power. For
instance, if two stars are the same size but one is twice as hot as the other in kelvin, the former would be 16
times as luminous as the latter.
Size
Astronomers generally measure the size of stars in terms of the radius of our sun. For instance, Alpha Centauri A has a radius of 1.05 solar radii (the plural of radius). Stars range in size from neutron stars, which can be only 12 miles (20 kilometres) wide, to supergiants roughly 1,000 times the diameter of the sun. The size of a star affects its brightness. Specifically, luminosity is proportional to the radius squared. For instance, if two stars have the same temperature and one is twice as wide as the other, the former will be four times as bright as the latter.
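Putting the temperature and size sections together, luminosity scales as the square of the radius times the fourth power of the surface temperature (in effect the Stefan–Boltzmann scaling discussed later under the luminosity formulae). The Python sketch below, with purely illustrative numbers, reproduces the two worked examples given in the text.

# Luminosity scaling with radius and temperature (illustrative sketch).
# L is proportional to R**2 * T**4 for a star radiating as a black body.

def luminosity_scaling(r1, t1, r2, t2):
    """Return L1/L2 given radii (any common unit) and surface temperatures in kelvin."""
    return (r1 / r2) ** 2 * (t1 / t2) ** 4

# Same size, twice the temperature: 16 times as luminous (surface temperature section).
print(luminosity_scaling(1.0, 2.0, 1.0, 1.0))   # 16.0

# Same temperature, twice the radius: 4 times as luminous (size section).
print(luminosity_scaling(2.0, 1.0, 1.0, 1.0))   # 4.0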
Mass
Astronomers represent the mass of a star in
terms of the solar mass, the mass of our sun. For
instance, Alpha Centauri A is 1.08 solar masses.
Stars with similar masses might not be similar
in size because they have different densities. For
instance, Sirius B is roughly the same mass as the sun but is 90,000 times as dense, and so is only a fiftieth its
diameter. The mass of a star affects surface temperature.
Magnetic field
Stars are spinning balls of roiling, electrically charged gas, and thus typically generate magnetic fields. When it
comes to the sun, researchers have discovered its magnetic field can become highly concentrated in small areas,
creating features ranging from sunspots to spectacular eruptions known as flares and coronal mass ejections. A
survey at the Harvard-Smithsonian Center for Astrophysics found that the average stellar magnetic field
increases with the star's rate of rotation and decreases as the star ages.

Metallicity
The metallicity of a star measures the amount of "metals" it has — that is, any element heavier than helium.
Three generations of stars may exist based on metallicity. Astronomers have not yet discovered any of what
should be the oldest generation, Population III stars born in a universe without "metals." When these stars died,
they released heavy elements into the cosmos, which Population II stars incorporated relatively small amounts
of. When a number of these died, they released more heavy elements, and the youngest Population I stars like
our sun contain the largest amounts of heavy elements.
The structure of a star can often be thought of as a series of thin nested shells, somewhat like an onion. A star
during most of its life is a main-sequence star, which consists of a core, radiative and convective zones, a
photosphere, a chromosphere and a corona. The core is where all the nuclear fusion takes place to power a star.
In the radiative zone, energy from these reactions is transported outward by radiation, like heat from a light
bulb, while in the convective zone, energy is transported by the roiling hot gases, like hot air from a hairdryer.
Massive stars that are more than several times the mass of the sun are convective in their cores and radiative in
their outer layers, while stars comparable to the sun or less in mass are radiative in their cores and convective in
their outer layers. Intermediate-mass stars of spectral type A may be radiative throughout.

After those zones comes the part of the star that radiates visible light, the photosphere, which is often referred to as the surface of the star. After that is the chromosphere, a layer that looks reddish because of all the hydrogen found there. Finally, the outermost part of a star's atmosphere is the corona, which if super-hot might be linked with convection in the outer layers.

A parsec is a unit of length used in astronomy to measure large distances to objects in space, such as stars and galaxies. It is defined as the distance at which one astronomical unit (AU) subtends an angle of one arcsecond. This means that if an object is one parsec away, and is being viewed from two different points in space that are one AU apart, the angle between the two lines of sight to the object will be one arcsecond. One parsec is approximately equal to 3.26 light-years, or 31 trillion kilometers (19 trillion miles). It is used by astronomers to express large distances in a more manageable form.
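The definition above translates directly into a small-angle calculation: the distance in parsecs is simply the reciprocal of the parallax angle in arcseconds. The Python sketch below illustrates this; the constants and the example parallax are standard reference values, not quoted from this note.

import math

# Distance from parallax, using the parsec definition above (illustrative sketch).
AU_KM = 1.495978707e8        # one astronomical unit in kilometres
LY_PER_PC = 3.26             # light-years per parsec (approximate)

def parsecs_from_parallax(parallax_arcsec):
    """d [pc] = 1 / parallax [arcsec], valid for small angles."""
    return 1.0 / parallax_arcsec

# One parsec in kilometres: the distance at which 1 AU subtends 1 arcsecond.
one_parsec_km = AU_KM / math.tan(math.radians(1.0 / 3600.0))
print(f"1 pc ~ {one_parsec_km:.3e} km, ~ {LY_PER_PC} light-years")

# Proxima Centauri has a parallax of roughly 0.768 arcsec (assumed value for illustration).
print(f"Proxima: ~ {parsecs_from_parallax(0.768):.2f} pc")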
3. STANDARD SOLAR MODEL
In 1939, Chandrasekhar presented a detailed discussion on the theory of polytropes in his book “An
Introduction to the Study of Stellar Structure". In that book he developed the expressions for a polytrope at its final stage, i.e. when complete degeneracy occurs. He investigated the evolution of white dwarfs and, on the basis of his theory, gave an expression for the 'Limiting Mass' of white dwarfs. A white dwarf star of mass more than about 1.4 solar masses would have to collapse inwards, its density increasing indefinitely as the body approached a singular configuration at the centre. The standard model of the sun is defined as the model based on the most plausible assumptions, in which experimental, observational and theoretical approaches are tied together by the physical and chemical evolution of the sun. There
are two separate dynamos operating in the Sun: a deep-seated helical dynamo, responsible for the generation
of the strong magnetic fields of active regions, and a near-surface chaotic dynamo, producing weak network
fields. How these two dynamos interact with each other and what their roles are in the overall solar cycle are
unknown. With respect to other stars, helical dynamos are typical for solar-type dwarfs, while non-helical
(distributed) dynamos can operate on
other, e.g., T Tauri and low-mass, fully
convective dwarf stars. Since our
observational capabilities are still a long
way from resolving the fundamental scales
of magnetic fields on the surfaces of other
stars, the Sun offers the unique
opportunity to study these dynamo
processes directly.
The standard solar model has four basic assumptions. The first is that the sun evolves in hydrostatic equilibrium. Hydrostatic equilibrium implies a local balance between pressure and gravity, expressed as

dP/dr = −G m ρ / r²,

where P is the pressure, ρ is the density, m is the mass enclosed within radius r, and r is the radius of the spherical shell.

To apply this condition, the temperature, mass, density and composition must be considered together, and hence they are related through an equation of state, which can be taken from the ideal gas law

P = ρ R T / μ,

where T is the absolute temperature, R is the gas constant, and μ is the mean molecular weight.
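As a rough illustration of how the hydrostatic-equilibrium and equation-of-state relations above are used together, the Python sketch below integrates dP/dr = −Gmρ/r² inward for a toy constant-density sun and then converts the resulting central pressure to a temperature with P = ρRT/μ. The constant-density assumption and the adopted μ are illustrative simplifications, not part of the standard solar model.

import math

# Toy estimate of central pressure and temperature for a constant-density "sun".
# A sketch only: the real standard solar model uses a self-consistent density profile.

G     = 6.674e-8          # gravitational constant, cm^3 g^-1 s^-2
M_SUN = 1.989e33          # solar mass, g
R_SUN = 6.96e10           # solar radius, cm
R_GAS = 8.314e7           # gas constant, erg K^-1 mol^-1
MU    = 0.6               # mean molecular weight of an ionized H/He mix (assumed)

rho = M_SUN / (4.0 / 3.0 * math.pi * R_SUN**3)   # mean density, ~1.4 g/cm^3

# Integrate dP/dr = -G m(r) rho / r^2 from the surface (P ~ 0) inward to the centre.
n_steps = 100_000
dr = R_SUN / n_steps
P = 0.0
for i in range(n_steps):
    r = R_SUN - (i + 0.5) * dr                   # midpoint of the current shell
    m = rho * 4.0 / 3.0 * math.pi * r**3         # mass enclosed within r
    P += G * m * rho / r**2 * dr                 # pressure grows toward the centre

T = P * MU / (rho * R_GAS)                       # ideal gas law: P = rho R T / mu
print(f"central pressure ~ {P:.2e} dyn/cm^2, central temperature ~ {T:.2e} K")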
The second assumption concerns the methods of energy transfer in a star like the Sun. Energy transfer was considered to take place by radiation, convection, conduction and neutrino loss, with radiation and convection carrying the major part of the energy. Radiative transfer is expressed as a temperature gradient:

dT/dr = −(3/16) · k ρ L / (π a c r² T³),

where k is the opacity of the stellar model, a function of the composition of the gas and of its physical conditions; computing it requires detailed knowledge of all the processes important for the radiative flux (elastic and inelastic scattering, absorption and emission, inverse bremsstrahlung) and in turn a detailed treatment of the atomic levels in the solar interior. L is the luminosity, or electromagnetic energy flux. The above equation represents the radiative mode of heat transfer, whereas heat transfer by convection follows

dT/dr = (1 − 1/γ) · (T/P) · (dP/dr),

where γ is the ratio of specific heats, Cp/Cv.


The third assumption of the model is made on energy production in the star. It was assumed that the
thermonuclear reactions are the only source of energy production inside the star (3). The structural changes in
the Sun, as it evolves, are caused by the nuclear reactions occurring in the central regions of the Sun. The
transmutation of four hydrogen atoms into one helium reduces the number density of particles in the central
regions, which would decrease the pressure. The pressure decrease does not actually occur, because the surrounding layers respond to the force imbalance by contracting onto the central regions. Half of the gravitational energy
released from the contraction goes to raising the temperature of the central regions (the other half, according to
the Virial Theorem, is radiated away). However, there are two processes, or chains of reactions, that are
responsible for the fusion in our Sun, these are the proton-proton (pp) chain and the carbon-nitrogen-oxygen
(CNO) chain. These chains are described in more detail below. Fusion reactions require high densities and
temperatures to take place, and therefore are predominantly found in stellar cores.
Luminosity formulae

Point source S is radiating light equally in all directions. The amount passing through an area A varies with the
distance of the surface from the light.

The Stefan–Boltzmann equation applied to a black body gives the luminosity of a black body, an idealized object which is perfectly opaque and non-reflecting:

L = σ A T⁴  and  F = L/A,

where A is the surface area, T is the temperature (in kelvins) and σ is the Stefan–Boltzmann constant, with a value of 5.670374419...×10⁻⁸ W⋅m⁻²⋅K⁻⁴. A is the area of the illuminated surface and F is the flux density of the illuminated surface. Imagine a point source of light of luminosity L that radiates equally in all directions. A
illuminated surface. Imagine a point source of light of luminosity L that radiates equally in all directions. A
hollow sphere centered on the point would have its entire interior surface illuminated. As the radius increases,
the surface area will also increase, and the constant luminosity has more surface area to illuminate, leading to a
decrease in observed brightness. The surface area of a sphere with radius r is A = 4πr², so for stars and other point sources of light:

F = L / (4πr²)

where r is the distance from the observer to the light source.
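The inverse-square relation above is easy to check numerically. The Python sketch below computes the flux received from the Sun at the Earth's distance; the solar luminosity and the astronomical unit used are standard reference values, not quoted from this note.

import math

# Flux from a point source via the inverse-square law (illustrative sketch).
L_SUN = 3.828e26        # solar luminosity in watts (standard reference value)
AU_M  = 1.495978707e11  # one astronomical unit in metres

def flux(luminosity_w, distance_m):
    """F = L / (4 * pi * r**2), in W/m^2."""
    return luminosity_w / (4.0 * math.pi * distance_m**2)

# At 1 AU this reproduces the solar constant, roughly 1361 W/m^2.
print(f"{flux(L_SUN, AU_M):.1f} W/m^2")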


For stars on the main sequence, luminosity is also related to mass approximately as below:

L / L☉ = (M / M☉)^3.5

If we define M as the mass of the star in units of solar masses and L as its luminosity in units of solar luminosities, the above relationship simplifies to L = M^3.5.
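A short sketch of the mass–luminosity scaling just given; the exponent 3.5 is the approximate value used in the text, and real stars deviate from it, especially at the low- and high-mass ends.

# Approximate main-sequence mass-luminosity relation, L = M**3.5 in solar units.

def main_sequence_luminosity(mass_solar):
    """Luminosity in solar luminosities for a main-sequence star of the given mass."""
    return mass_solar ** 3.5

for m in (0.5, 1.0, 2.0, 8.0):
    print(f"M = {m:>4} M_sun  ->  L ~ {main_sequence_luminosity(m):10.2f} L_sun")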
Luminosity is an intrinsic measurable property of a star, independent of distance. The concept of magnitude, on the other hand, incorporates distance. The apparent magnitude is a measure of the diminishing flux of light as a result of distance, according to the inverse-square law. The Pogson logarithmic scale is used to measure both apparent and absolute magnitudes, the latter corresponding to the brightness of a star or other celestial body as it would be seen if it were located at a distance of 10 parsecs (3.1×10^17 metres). In addition to this brightness decrease from increased distance, there is an extra decrease of brightness due to extinction from intervening interstellar dust.

By measuring the width of certain absorption lines in the stellar spectrum, it is often possible to assign a certain luminosity class to a star without knowing its distance. Thus a fair measure of its absolute magnitude can be determined without knowing either its distance or the interstellar extinction.

In the magnitude handout, we distinguished between two different magnitudes: the apparent magnitude, which indicates how bright an object appears to be, and the absolute magnitude, which indicates a star's true brightness, or luminosity. The only reason those two numbers differ from star to star is that the stars are not all at the same distance from us. We can take advantage of this by using the difference between a star's apparent magnitude and absolute magnitude to calculate the distance of the star. This difference is called the distance modulus, m – M.

Recall that apparent magnitude is a measure of how bright a star appears from Earth, at its "true distance," which we call D. Absolute magnitude is the magnitude the star would have if it were at a standard distance of 10 parsecs away. This presents us with three general possibilities for the value of the distance modulus:

If the star is exactly 10 parsecs away (rare, but it does happen), the absolute magnitude will be the same as the apparent magnitude, and the apparent magnitude is a good indicator of true luminosity. Thus, if m – M = 0, then the distance D = 10 pc.

If the star is closer than 10 parsecs, then the star will appear deceptively bright; its apparent magnitude will be too bright to tell us its true luminosity. The star looks brighter than it actually is. Remember that the magnitude system is "backwards," in that lower numbers mean brighter stars. Therefore, when the star is closer than 10 parsecs, the apparent magnitude will be a lower number (brighter) than the absolute magnitude, and m – M will be a negative number. So if m – M < 0, then the distance D < 10 pc.

If the star is farther than 10 parsecs, then the star will appear deceptively dim; its apparent magnitude will be a higher number (fainter) than the absolute magnitude, and m – M will be a positive number. So if m – M > 0, then the distance D > 10 pc.
In measuring star brightness, absolute magnitude, apparent magnitude, and distance are interrelated parameters; if two are known, the third can be determined. Since the Sun's
luminosity is the standard, comparing these parameters with the Sun's apparent magnitude
and distance is the easiest way to remember how to convert between them, although
officially, zero-point values are defined by the IAU.
The magnitude of a star, a unitless measure, is a logarithmic scale of observed visible
brightness. The apparent magnitude is the observed visible brightness from Earth which
depends on the distance of the object. The absolute magnitude is the apparent magnitude at a distance of 10 pc (3.1×10^17 m); the bolometric absolute magnitude is therefore a logarithmic measure of the bolometric luminosity.
The difference in bolometric magnitude between two objects is related to their luminosity
ratio according to:[19]

M_bol,1 − M_bol,2 = −2.5 log10(L1 / L2)

where:
● M_bol,1 is the bolometric magnitude of the first object
● M_bol,2 is the bolometric magnitude of the second object
● L1 is the first object's bolometric luminosity
● L2 is the second object's bolometric luminosity

The zero point of the absolute magnitude scale is actually defined as a fixed luminosity of 3.0128×10^28 W. Therefore, the absolute magnitude can be calculated from the luminosity in watts:

M_bol = −2.5 log10(L / L0)

where L0 is the zero-point luminosity, 3.0128×10^28 W.

The luminosity in watts can in turn be calculated from an absolute magnitude (although absolute magnitudes are often not measured relative to an absolute flux):

L = L0 × 10^(−0.4 M_bol)

Returning to the solar model, a measure of the luminosity generated inside the star by the nuclear reactions is given by

dL/dr = 4π r² ρ (ε + T dS/dt),

where ε is the rate of energy production, in erg g⁻¹ s⁻¹.

Absolute Magnitude: the apparent magnitude that a star would have if it were, in our imagination, placed at a
distance of 10 parsecs or 32.6 light years from the Earth.

The Distance Modulus

From the definitions for absolute magnitude M and apparent magnitude m, and some algebra, m and M are related
by the logarithmic equation

M = m - 5 log [d(pc) / 10]

which permits us to calculate the absolute magnitude from the apparent magnitude and the distance. This equation
can be rewritten as

d(pc) = 10^((m - M + 5) / 5)

The quantity m - M is called the distance modulus, since it is a measure of how distant the star is.
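A short Python sketch of the two distance-modulus formulas above; the example star's magnitudes are made up purely for illustration.

import math

# Distance modulus relations from the text (illustrative sketch).

def absolute_magnitude(m, d_pc):
    """M = m - 5 log10(d / 10), with d in parsecs."""
    return m - 5.0 * math.log10(d_pc / 10.0)

def distance_pc(m, M):
    """d [pc] = 10 ** ((m - M + 5) / 5)."""
    return 10.0 ** ((m - M + 5.0) / 5.0)

# Hypothetical star: apparent magnitude 8.5, absolute magnitude 3.5,
# so m - M = 5 and the star should lie at 100 pc.
print(distance_pc(8.5, 3.5))          # 100.0
print(absolute_magnitude(8.5, 100.0)) # 3.5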
In the luminosity equation above, S is the entropy per unit mass. The T dS/dt term (the "entropy term") is the sole term in the basic stellar evolution equations that includes time explicitly.

The final assumption of the standard solar model is that the sun was initially of
homogeneous, primordial composition, and highly convective at its main sequence turn on.
Since heavy elements are neither created nor destroyed in the thermonuclear reactions in a
solar-type star, they provide a record of the initial abundances, and only the relative amounts
of hydrogen and helium-4 are an indicator of stellar evolution.
For the understanding of the physical and chemical structure, the theory of stellar structure and evolution states that the condition of hydrostatic equilibrium, jointly with the conservation of energy and the mechanism of energy transport, determines the physical structure of a star like the Sun. The above four assumptions or inputs can be summarized as:
● The equation of state for stellar matter
● The radiative opacity (k) as a function of density (ρ) , temperature (T), and chemical
composition
● The energy production per unit mass and time, again as a function of ρ, T, and the chemical composition
● Initial chemical composition
From one of the assumptions above, it should be pointed out that the radiative opacity k is directly connected with the photon mean free path, λ = 1/(kρ). Throughout the internal radiative region, k governs the temperature gradient through the well-known relations given above. With regard to the fourth assumption, the mass abundances of H, He and heavier elements are denoted X, Y and Z respectively (with X+Y+Z = 1). The present ratio (Z/X) of heavy elements (metals) to hydrogen in the atmosphere is (Z/X)photo = 0.0245 (1±0.061), which was derived from meteoritic analysis and solar spectroscopy.
If complete information about the initial photospheric composition were available and the theory of stellar models were capable of predicting the solar radius firmly, then there would be no free parameter. The model under investigation should account for the solar luminosity and radius at the solar edge without adjusting any parameters; the standard solar model must reproduce the Sun's luminosity and radius at the Sun's age. The accuracy of the mass determination depends
Sun's luminosity and radius at the Sun's age. The accuracy of the mass determination depends
directly on the determination of G. The mass of the Sun is 1.9891E33 g with a relative
uncertainty of ±0.02% . The radius of the
Sun for stellar structure calculations is
defined at an optical depth of tau = 2/3, and
is known, through transit and eclipse
measurements to be 6.96E10 cm at tau =
0.001 with an error of ±0.01%. Here, as
noted by Ulrich and Rhodes, it is important
to translate the glancing angle measurements
at the Sun's limb to determine the Sun's
radius to a tau = 2/3 optical depth measured perpendicular to the Sun's surface. The
luminosity is determined from solar constant measurements from space. This luminosity of
the sun (Lo) depends in a rather sensitive way on the initial helium abundance Y and the
metal abundance Z. Since the ratio (Z/X) is constrained by observational data as Y & Z can
be chosen independently, if Y increases, Z must decrease. ERB-Nimbus 7 measures solar
irradiance of 1371.0±0.765 W/m² and SMM/ACRIM measures. The luminosity of the Sun is
3.846E33 erg/s from SMM/ACRIM and 3.857E33 erg/s from ERB-Nimbus 7. Taking the
average of the two and setting the error equal to the difference between them, one obtains Lsun
= 3.8515±0.011E33 erg/s. Here it is well known that the mass of The sun M0 =
(1.98892±0.00025)×1033 gm and radius R0= (6.9598 ±0.0007) ×1010cm luminosity (L0) i.e.
produced by the sun is =4.844(1 ± 0.001)× 1033 erg sec-1.
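The averaging step quoted above is simple arithmetic and easy to reproduce; a short sketch:

# Average of the two satellite-derived solar luminosities quoted above (erg/s).
L_ACRIM, L_NIMBUS = 3.846e33, 3.857e33
L_mean = 0.5 * (L_ACRIM + L_NIMBUS)          # 3.8515e33 erg/s
err = abs(L_NIMBUS - L_ACRIM)                # 0.011e33 erg/s
print(f"L_sun = {L_mean:.4e} +/- {err:.1e} erg/s")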
The age of the Sun is inferred from the ages of the oldest meteorites, and is commonly quoted as being 4.6 Gyr or 4.7 Gyr. Guenther notes that the latest determinations of the ages of the oldest meteorites set the age of meteoritic condensation at 4.56 Gyr, which revises his earlier age estimate. The new best estimate of the Sun's age is now 4.52±0.04 Gyr. So, in order to produce a standard solar model, one must study the evolution of a homogeneous solar mass up to the solar age. To reproduce the measured radius, the efficiency of convection must be taken into account, as it dominates the energy transport in the outer layers of the sun.
Mixing length theory defines the mixing length l as the distance over which a moving parcel of gas can be identified before it mixes appreciably. This length l is related to the pressure scale height

Hp = −1 / (d ln P / dr)

through l = α Hp, where α is independent of the radial coordinate and is used as a free parameter. By varying α, the mixing length can be adjusted; thus the solar radius is set by the efficiency of convection. If α is increased, convection becomes more efficient, the temperature gradient becomes smaller and the surface temperature higher. The above description establishes that a standard model has three essential parameters: α, Y and (Z/X).
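As an illustration of the pressure scale height that enters the mixing-length prescription, the sketch below evaluates Hp = P/(ρ g), the hydrostatic form of the definition above, for rough solar-photosphere values; the photospheric pressure and density used are typical reference estimates, not taken from this note.

# Rough pressure scale height at the solar photosphere (illustrative sketch).
# For hydrostatic equilibrium, Hp = -(d ln P / dr)**-1 = P / (rho * g).

G     = 6.674e-8        # cm^3 g^-1 s^-2
M_SUN = 1.989e33        # g
R_SUN = 6.96e10         # cm

g = G * M_SUN / R_SUN**2            # surface gravity, ~2.7e4 cm/s^2

P_phot   = 1.0e5                    # photospheric pressure, dyn/cm^2 (typical estimate)
rho_phot = 2.0e-7                   # photospheric density, g/cm^3 (typical estimate)

Hp = P_phot / (rho_phot * g)        # pressure scale height in cm
print(f"Hp ~ {Hp/1e5:.0f} km")      # on the order of a couple of hundred km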
Many studies have been dedicated to finding the volatility trends of the condensing
elements, which are expressed by the condensation temperature of an element and its
compounds. More recent studies have become quite detailed in investigating the
condensation of major rock-forming elements under various potential nebular conditions,
such as different dust-to-gas ratios.

Figure: Mean molecular weight decrease in the outer region of the star (the luminosity of a stellar model is very sensitive to the mean molecular weight, i.e. proportional to μ^7.5).

The abundances of most of the elements can be directly determined from the photospheric
spectrum of the Sun. It is assumed that these abundances are identical to the abundance of the
elements in the zero age models. Neon and argon abundances are adopted from their
measured abundances in the solar corona, solar winds and nebula (Meyer 1979). Helium, as
the second most abundant element in the Sun, is left as a free parameter of the standard solar
model. The abundance of helium is adjusted to produce a solar model with the Sun's
luminosity. Recent results from standard solar models that include helium and heavy-element settling imply protosolar abundances from which, considering settling effects, one can derive protosolar mass fractions of X0 = 0.7110, Y0 = 0.2741, and Z0 = 0.0149.
New values for the C, N, O, Ne and Ar abundances have been calculated using three-dimensional rather than one-dimensional atmospheric models, including hydrodynamical effects and uncertainties in atomic data and observational spectra. The new abundance estimates, together with the previous best estimates for other solar surface abundances, yield a ratio of heavy elements to hydrogen by mass of Z/X = 0.0176, much less than the previous value of Z/X = 0.0229. The most problematic and important source of uncertainty is the surface composition of the Sun. Systematic errors dominate, arising from the effects of line blending, departures from local thermodynamic equilibrium and details of the model of the solar atmosphere. It was assumed in the calculation that the uncertainty in all the important element abundances is approximately the same.
The significance of a disagreement with the standard solar model is therefore great. The unsolved solar neutrino problem gave birth to a series of ad hoc "nonstandard" solar models, in which the solar model was changed with great care in order to lower the calculated 8B neutrino flux. Over the past two decades, the most often hypothesized change is some form of mixing of the solar material that reduces the central temperature and therefore the important 8B neutrino flux. Previous arguments that extensive mixing does not occur are theoretical, including the fact that the required energy is 5 orders of magnitude larger than the total present rotational energy. Thus, these scientists were able to establish a new nonstandard solar model by adjusting the free parameters to reduce the calculated 7Be flux more than the 8B flux. The calculated neutrino fluxes depend upon the central temperature of the solar model approximately as a power of the temperature (which varies from almost 1.1 for the p-p neutrinos to 24 for the 8B neutrinos). Similar temperature scaling is found for nonstandard solar models. Within the three years from 2001 to 2003, scientists solved a mystery with which they had been struggling for four decades. The solution turned out to be important for both physics and astronomy. During the first half of the twentieth century, physicists came to believe that the conversion of hydrogen to helium is the source of the solar luminosity.
This conversion process can be written schematically as

4p → 4He + 2e⁺ + 2ν_e

That is, four hydrogen nuclei (protons) form the nucleus of a helium atom (4He), two positrons (e⁺) and two mysterious particles named neutrinos (ν_e). Two neutrinos are produced each time the fusion reaction occurs, and since the four protons together are heavier than the reaction products, the mass difference is released as electromagnetic energy, which is what actually makes the sun shine. Those neutrinos have zero electric charge, enormous penetrating ability, and were long assumed to be massless. Nuclear fusion in the sun's interior produces these neutrinos in association with electrons and positrons, so they are called electron neutrinos (ν_e). Besides the electron neutrino, two other types exist: the tau neutrino (ν_τ) and the muon neutrino (ν_μ). In the 1960s, Nobel laureate Raymond Davis Jr. began an experiment that detected solar neutrinos by counting the radioactive argon (37Ar) atoms they produced in a chlorine-based cleaning fluid. The number of neutrinos found in this experiment was very much smaller than the expected value, so some of the predicted neutrinos appeared to be missing. After the discovery of a "smoking gun", the difference between the total number of neutrinos and the number of electron neutrinos was easily explained.
The missing-neutrino puzzle was ultimately resolved experimentally at the Sudbury Neutrino Observatory (SNO) in Ontario, Canada, announced on 18 June 2001, using 1,000 metric tons of ultra-pure heavy water (D2O). By measuring the total number of neutrinos of all types, the SNO detector provided the fingerprint of the smoking gun: the neutrinos are created as electron neutrinos but change type on their way to the Earth. This conversion is a quantum mechanical process known as neutrino oscillation. In December 1990, the Kamiokande III experiment showed no anticorrelation between the sunspot number and the 8B neutrino flux, a result also seen in data from the Super-Kamiokande experiments.
In the SSM, the neutrino flux from the sun was calculated on the assumption that L_neutrino (the neutrino luminosity) = L_γ (the optical luminosity), which implies that if there is a change in the optical luminosity then the solar neutrino flux will also change, i.e. the neutrino flux will vary within the solar cycle. In this connection, it may be mentioned that a perturbed solar model can account not only for the neutrino flux variability but also for the solar irradiance variability within the solar cycle and the effects of solar flares. The solar neutrino fluxes detected by the Homestake, Kamiokande/Super-Kamiokande, SAGE and GALLEX-GNO detectors are variable in nature. The periodicity of the solar neutrino flux is also compatible with the periodicity of other solar activity indicators, i.e. sunspots, solar flares, solar proton events (E > 10 MeV), solar irradiance, etc.
4. SOLAR PHENOMENOLOGY
Solar Activity Indices: The detailed study of active regions of the Sun is of prime importance in astrophysics research. The formation and decay of the magnetic field in the Sun's atmosphere is responsible for the different solar activity indices such as Solar Radio Flux, Solar Flare, Coronal Mass Ejection, Solar Eruption, Sunspot, etc. According to solar dynamo theory, the dynamo process operates on the dynamics of the magnetic field inside the Sun.

So, analysis of the different solar activity indices is very important for examining solar physics as well as the solar-terrestrial relationship, which is a fundamental component of solar astrophysics. Some of the important solar activity indices are listed below:

● Sunspot Number (SSN): The solar dynamo process is mainly governed by fluctuations of the internal magnetic field. Several methods exist to measure that dynamo process, and the sunspot number is one of the primary indicators used to quantify its strength (Waldmeier, 1962). A sunspot is simply a dark spot in the photosphere of the Sun caused by the local magnetic field; the mean diameter and temperature of a sunspot are around 37,000 km and 4,600 K respectively. During the 19th century, Rudolf Wolf at the Zurich Observatory first introduced the sunspot number, also known as the Wolf or Zurich Sunspot Number, expressed as

R_Z = k (10 g + n)

where g is the number of sunspot groups, n is the number of individual sunspots, and k is a correction factor whose value depends mainly on the method of observation as well as the instrument used (Usoskin & Mursula, 2003a; 2003b). A sunspot number of zero indicates a spotless Sun, i.e. no dark spots on the Sun's visible surface. The structure of a typical sunspot is shown in Fig. 4.1, where the dark spot at the centre is known as the umbra and the surrounding, less dark region is called the penumbra.
Fig. 4.1: The pictorial view of a typical Sunspot (Image courtesy: NASA)
The periodic variation of the sunspot number was detected by Schwabe in 1843 and is recognized as the Solar Cycle or Schwabe Cycle. The length of a cycle is computed by taking the difference between two consecutive solar minima. The mean length of a solar cycle is 11 years, but it can vary from 9 to 12 years. Hale introduced the 22-year period associated with magnetic polarity reversal, which is named the Hale Cycle; another period, of about 78 years, is known as the Gleissberg Cycle. The sunspot record from 1755 to 1766 is considered Solar Cycle 1, and the details of each solar cycle are listed in Table 4.1. A small worked example of the Wolf number calculation is given below.
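A minimal sketch of the Wolf (Zurich) sunspot number formula R_Z = k(10g + n) defined above; the group and spot counts and the correction factor k used here are invented purely for illustration.

# Wolf (Zurich) relative sunspot number, R_Z = k * (10 * g + n) -- illustrative sketch.

def wolf_number(groups, spots, k=1.0):
    """groups: number of sunspot groups; spots: total individual spots; k: observer correction factor."""
    return k * (10 * groups + spots)

# Hypothetical observation: 3 groups containing 11 individual spots, observer factor k = 0.6.
print(wolf_number(groups=3, spots=11, k=0.6))   # 0.6 * (30 + 11) = 24.6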
● Sunspot Area (SA): The total area of all sunspots on the Sun's visible surface is known as the sunspot area. Like the sunspot number, this index is treated as a physical parameter describing solar activity, since it relates to the emerging magnetic field around sunspots. The index is expressed in ppm (parts per million of the solar hemisphere) and is calculated from sunspot images taken at 672.3 nm. The daily value of this index has been computed by the Royal Greenwich Observatory from images captured at Greenwich, England since 1874; the sunspot area is also calculated from observations at Kodaikanal, India.
Table 4.1: The details of each and every Solar Cycle

Solar Cycle Starting Date Ending Date Period Maximum Value of SSN
1 03 / 1755 06 / 1766 11 years 04 months 86.5 (06 / 1761)
2 06 / 1766 06 / 1775 09 years 00 months 115.8 (09 / 1769)
3 07 / 1775 09 / 1784 09 years 04 months 158.5 (05 / 1778)
4 10 / 1784 05 / 1798 13 years 08 months 142.0 (02 / 1788)
5 06 / 1798 12 / 1810 12 years 07 months 49.2 (02 / 1807)
6 01 / 1811 05 / 1823 12 years 05 months 48.7 (04 / 1816)
7 06 / 1823 11 / 1833 10 years 06 months 71.7 (11 / 1829)
8 12 / 1833 07 / 1843 09 years 09 months 146.9 (03 / 1837)
9 08 / 1843 12 / 1855 12 years 05 months 131.6 (02 / 1848)
10 01 / 1856 03 / 1867 11 years 03 months 97.6 (02 / 1860)
11 04 / 1867 12 / 1878 11 years 09 months 140.5 (08 / 1870)
12 01 / 1879 03 / 1890 11 years 03 months 74.6 (12 / 1883)
13 04 / 1890 02 / 1902 11 years 11 months 87.9 (01 / 1894)
14 03 / 1902 08 / 1913 12 years 06 months 64.2 (02 / 1906)
15 09 / 1913 08 / 1923 10 years 00 months 105.4 (08 / 1917)
16 09 / 1923 09 / 1933 10 years 01 months 78.1 (04 / 1928)
17 10 / 1933 02 / 1944 10 years 05 months 119.2 (04 / 1937)
18 03 / 1944 04 / 1954 10 years 02 months 151.8 (05 / 1947)
19 05 / 1954 10 / 1964 10 years 06 months 201.3 (03 / 1958)
20 11 / 1964 06 / 1976 11 years 08 months 110.6 (11 / 1968)
21 07 / 1976 09 / 1986 10 years 03 months 164.5 (12 / 1979)
22 10 / 1986 05 / 1996 09 years 08 months 158.5 (07 / 1989)
23 06 / 1996 12 / 2008 12 years 05 months 120.8 (03 / 2000)
24 01 / 2009 Till Date Continue 81.8 (04 / 2014)

● Solar Radio Flux (F10.7): The radio emission at 10.7 cm wavelength (2800 MHz) is selected from among all other wavelengths because of its high correlation with solar extreme ultraviolet radiation and because it has the most complete and longest observational record apart from sunspot-related indices. The 10.7 cm solar radio flux originates in the lower corona and chromosphere of the Sun. The relationship between the sunspot number and the solar radio flux at 10.7 cm was examined by Floyd et al. (2005) and found to be stable for 25 years. But
during solar cycle 23, this relationship changed due to the Gnevyshev Gap (Bruevich et al., 2014). A nonlinear correlation between the sunspot number and the radio flux at 10.7 cm was analysed during solar cycles 18-20. To attain the best linear approximation, two methodologies were proposed by Vitinsky et al. (1986): (1) for lower radio flux (F10.7 < 150 sfu) and (2) for higher radio flux (F10.7 > 150
sfu). [1 solar flux unit (sfu) = 10^-22 W m^-2 Hz^-1]. There are two major components of the 10.7 cm solar radio flux emission: one is rotationally modulated, and the other is unmodulated. The unmodulated solar radio flux emission originates from outside active regions; it is mostly due to thermal bremsstrahlung, which dominates the Sun during solar minimum. The rotationally modulated 10.7 cm (2800 MHz) solar radio emission arises mostly from thermal gyroresonance emission, which originates from the magnetic fields above sunspots. The solar radio flux at 10.7 cm has a high degree of correlation with all other solar phenomena, which indicates the interdependence of the different plasma parameters and their sources. So, the solar radio flux is an excellent indicator of major solar activity. Also, the solar radio flux at 10.7 cm wavelength plays a very valuable role in forecasting space weather because it originates from the lower corona and chromosphere of the Sun. Detailed research on solar radiation models has shown that the principal characteristics of the energetic particle spectra contribute to the solar radio flux.
The daily F10.7 index is measured at a centre frequency of 2800 MHz with a bandwidth of 100 MHz using two automated radio telescopes, called flux monitors, at the Dominion Radio Astrophysical Observatory in Penticton, British Columbia.
● Mg II core-to-wing Ratio (MgII): The Mg II core-to-wing ratio is a good solar activity index in terms of solar radiation. It has a better correlation with solar extreme UV emission than the 10.7 cm solar radio flux and can be utilized as a proxy, instead of the 10.7 cm radio flux, to model the solar extreme UV emission over several solar cycles (the correlation between the Mg II core-to-wing ratio and the 10.7 cm solar radio flux data is around 0.99 over the entire data set). The US Air Force has also shown that the long-term RMS error in their satellite models is reduced by 20–40% by replacing 10.7 cm radio flux data with Mg II c/w ratio data. The Mg II c/w ratio is measured by various instruments on ESA and NASA satellites such as SUSIM, SOLSTICE, GOME, NOAA9, NOAA11, etc. at different time scales. The correlations between the different data sets from those instruments are around 0.986 to 0.996, and the time series were merged to construct a single composite series using linear scaling. The Mg II index is very similar to the Ca II K index, as it is formed by taking the ratio of the core line emission (near 2800 Å) to the wing emission (near 2767 and 2833 Å). But to measure the solar variability from the chromosphere of the Sun, the Mg II c/w ratio plays a significant role, as the h and k lines of Mg II are much stronger than the Ca II lines.
Actually, the resonance lines of Mg II are formed at plasma temperatures around 10,000−15,000 K and they deliver important information from the photosphere out to the higher parts of the chromospheric plateau. Mg II lines effectively represent the prominence-to-corona transition region (PCTR) between the cool, dense core and the hot corona. Thus a continuous composite single time series of the Mg II c/w ratio is very important to the scientific community for deriving the solar extreme UV emission.
● Solar Flare (SF): A solar flare is an intense and rapid brightness variation in the Sun's atmosphere. The sudden release of magnetic energy from the solar atmosphere due to reconnection of the magnetic field in the coronal region is the main source of solar flares (Sturrock, 1968; Benz, 2017). The radiation emitted during a flare spans a wide range of the electromagnetic spectrum, from radio to X-rays and gamma rays. The total energy released from a solar flare is around 10^32 ergs, equivalent to the energy of millions of 100-megaton hydrogen bombs. R. C. Carrington and R. Hodgson first recorded a solar flare in white-light images of the Sun in 1859. The solar flare is a complex process in both the spatial and temporal domains, extending from the chromosphere to the corona of the Sun. During the flaring activity, solar energetic particles such as protons, electrons and heavy nuclei are also released and accelerated in the solar atmosphere. Flares can last anywhere from a few seconds to a few hours. The effect of a solar flare on technological instruments in space is much more important than that of other solar activity, and at the same time it also affects living things on the Earth. Although the impact of an associated coronal mass ejection is minimized by the magnetic field of the Earth, the associated X-ray emission disturbs the ionosphere of the Earth and the UV radiation increases the temperature of the Earth's atmosphere. So, the solar flare is considered one of the most prominent research topics in the domain of solar physics as well as space weather. A bright solar flare in a solar active region is shown in Fig 4.2. The concept of the Solar Flare Index was first introduced by Kleczek (1952) as FI = I×T, which is roughly proportional to the net emitted flare energy. In this relationship, I is the intensity scale given in Table 4.2 and T represents the time span (in minutes) of the flare in Hα flux. Classification of the solar flare index is made on the basis of brightness and importance. The importance is categorised as 1, 2, 3 or 4 according to the size of the flare, as given in Table 4.3, whereas the brightness is defined as B = bright, N = normal and F = faint in terms of emission intensity (a small worked example of the flare index follows Table 4.3). The calculated data sets are available at the web page of Kandilli Observatory (http://www.koeri.boun.edu.tr/astronomy) as well as the National Geophysical Data Center (NGDC) (ftp://ftp.ngdc.noaa.gov/STP/SOLAR_DATA).
Fig 4.2: A bright solar flare in the active region of the Sun (Image courtesy: NASA)

Table 4.2: The intensity scale I of a flare in Hα flux

Importance I Importance I
SF, SN, SB 0.5 2B 2.5
1F, 1N 1.0 3N, 3F, 4F 3.0
1B 1.5 3B, 4N 3.5
2F, 2N 2.0 4B 4.0

Table 4.3: The importance class according to the size of the flare in Hα flux

Importance Flare Area (Actual*) Flare Area (Apparent*)


S A < 2.0 A < 200
1 2.1 < A < 5.1 200 < A < 500
2 5.1 < A < 12.4 500 < A < 1200
3 12.5 < A < 24.7 1200 < A < 2400
4 A > 24.7 A > 2400
*Actual area is given in square degrees and apparent area is given in millionths of the total solar disk.
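A minimal sketch of the Kleczek flare index FI = I × T described above, using the intensity values from Table 4.2; the example flare classes and durations are invented purely for illustration.

# Kleczek (1952) solar flare index, FI = I * T  -- illustrative sketch.
# I comes from the importance/brightness class (Table 4.2); T is the flare duration in minutes.

INTENSITY_SCALE = {
    "SF": 0.5, "SN": 0.5, "SB": 0.5,
    "1F": 1.0, "1N": 1.0, "1B": 1.5,
    "2F": 2.0, "2N": 2.0, "2B": 2.5,
    "3N": 3.0, "3F": 3.0, "4F": 3.0,
    "3B": 3.5, "4N": 3.5, "4B": 4.0,
}

def flare_index(flare_class, duration_min):
    """Return FI = I * T for a flare of the given class lasting duration_min minutes."""
    return INTENSITY_SCALE[flare_class] * duration_min

# Hypothetical day with two flares: a 1B flare lasting 20 minutes and a 2N flare lasting 35 minutes.
daily_fi = flare_index("1B", 20) + flare_index("2N", 35)
print(daily_fi)   # 1.5*20 + 2.0*35 = 100.0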
● Coronal Mass Ejection (CME): A sudden eruption of plasma from the coronal region of the Sun, occurring frequently, is commonly known as a Coronal Mass Ejection (Low, 1996). CMEs are of scientific interest due to their effect on space weather as well as on the Earth (Baker et al., 2008). The first CME was captured by OSO-7 using an on-board coronagraph in the early 1970s. Later CMEs were observed for longer durations and with improved quality using different spacecraft such as Skylab during 1973 to 1974, P78-1 during 1979 to 1985, SMM during 1984 to 1989,
SPHERICAL ASTRONOMY
Coordinate Systems: Astrometry is the field of astronomy dedicated to measurements of the angular separations between stars, where the stars are considered to be fixed on a sphere of unit radius – the Celestial Sphere. Nowadays the goals of Astrometry include the determination of fundamental
reference systems, accurate measurements of time, corrections due to precession and nutation, as well
as determination of the distance scales and motions in the Galaxy. Because astrometry deals with arcs,
angles and triangles on the surface of the celestial sphere, whose properties differ from those of
Euclidean geometry, let us examine the basic definitions of geometry used on the surface of a sphere.

From these definitions we shall detail the relations that connect the constituent elements of a
spherical triangle, the main object of study of spherical trigonometry, the mathematical technique
used in the treatment of observations and whose fundamental concepts are presented in this chapter.
We shall call Spherical Astronomy the area of astronomy that involves the solution of problems on
the surface of the celestial sphere. One of the main applications of the formulae of spherical
astronomy is to obtain the relations between the several coordinate systems employed in astronomy.

Selecting a coordinate system depends on the problem to be solved, and the transformations between the systems allow measurements made in one system to be converted into another. These transformations can be obtained using either spherical trigonometry or linear algebra. Another interesting application is the transformation from a coordinate system with origin at the center of the Earth into a coordinate system centered on other planets, on a spacecraft, or on the barycenter of the Solar System, which is especially useful for the study of positions and motions of objects in the Solar System.

Basic Definitions: In spherical astronomy we consider stars as dots on the surface of a sphere of unit radius. A sphere is defined as a surface where all the points are equidistant from a fixed point; this two-dimensional surface is finite but unlimited. On this surface we use spherical geometry, which is the part of mathematics that deals with curves that are arcs of great circles, defined below. To start with, let us define some of the basic concepts of spherical geometry, which differ from the concepts of Euclidean geometry, applicable to plane two-dimensional surfaces.

• The intersection of a sphere with a plane is a circle.

• Any plane that passes through the center of the sphere intercepts the sphere in a Great Circle.

• Any circle, resulting from the intersection of the sphere with a plane that does not pass through the center, is called a small circle.

Figure: The spherical angle C is the dihedral angle between the planes that cross the sphere at the arcs
AP and PB. It is also defined as the angle between the tangent lines to the arcs at the intersection point
P.
Corollary
For a sphere of unit radius, the arc of great circle connecting two points on the surface is equal to the angle, in radians, subtended at the center of the sphere (Fig. 1.3). This comes directly from the definition of length on a circle: the length of the path c is c = Rψ, where ψ is the angle AÔB in the figure. If R = 1, then ψ and c have the same units, and c = ψ.
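A quick numerical sketch of the corollary (the angle and the Earth radius of about 6371 km are only illustrative values): on the unit sphere the arc length equals the central angle in radians, and on a sphere of radius R it is simply scaled by R.

```python
import math

def arc_length(psi_deg, radius=1.0):
    """Length of a great-circle arc subtending the central angle psi (c = R * psi)."""
    return radius * math.radians(psi_deg)

# On the unit sphere the arc equals the angle in radians:
print(arc_length(90.0))            # ~1.5708 (= pi/2)
# Scaled to the Earth (mean radius ~6371 km), the same 90 deg arc is:
print(arc_length(90.0, 6371.0))    # ~10007.5 km
```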
Poles
The poles (P, P′) of a great circle are the intersections of the diameter of the sphere perpendicular to the great circle with the spherical surface. The poles are antipodes (diametrically opposed points), i.e., they are separated by arcs of 180°.
Spherical angle
The angle generated by the intersection of two great circle arcs is the angle between their planes (dihedral angle), and is called the spherical angle (Fig. 1.4). The spherical angle can also be defined as the angle between the tangents (PA', PB') to both arcs of great circle at their intersection point.
Elements: The sides of the spherical angle are the arcs of great circle and, at their intersection, we have the vertex. In the figure, the sides are PA and PB, and the vertex is P.
Coordinate Systems
Fundamentals Glossary
Zenith: point directly overhead of the observer.
Antipode: point diametrically opposed to another on the surface of a sphere.
Nadir: antipode of the zenith.
Meridian: the North-South line passing through the zenith.
Diurnal motion: the east-west daily rotation of the celestial sphere. A star's path in diurnal motion is a small circle around the celestial pole.
Culmination: the meridian passages of a star in its diurnal motion. There are upper and lower
culminations.
Celestial equator and Ecliptic: two fundamental great circles defined in the celestial sphere. The
celestial equator is the projection of Earth’s equator on the celestial sphere. The ecliptic is the
projection of the Earth’s orbit in the celestial sphere. From our point of view, it is the path traced by
the annual motion of the Sun in the sky.
Equinox : from Latin aequinoctium, equal nights. The instant when the Sun is at the celestial equator,
thus day and night have equal duration.
Solstice : from Latin solstitium, sun stop. The instant when the Sun reverts its north-south annual
motion, defining either the longest or shortest night of the year.
Vernal Point: one of the intersections between the celestial equator and the ecliptic. Also called
Vernal Equinox, or First Point of Aries. The antipode of the vernal point is the Autumnal Point, also
called Autumnal Equinox or First Point of Libra.
Sidereal: with respect to the stars (from Latin sidus, star). A sidereal period refers to the time taken
to return to the same position with respect to the distant stars.
Synodic: with respect to alignment with some other body in the celestial sphere, typically the Sun (from Greek sunodos, meeting or assembly). A synodic period typically refers to the time taken to return to the same position with respect to the Sun. For the Sun itself, another reference in the sky is used (generally the meridian for the synodic day, or the vernal point for the synodic year).

Horizontal Coordinate System


Fundamental plane: local horizon
Coordinates: azimuth A (usually measured from the south point, westwards) and altitude h (measured from the horizon towards the zenith).
The azimuth is sometimes measured from north, increasing eastwards; this is the convention in navigation. In astronomy it is more convenient to measure from the south, because a star culminating between the pole and the south cardinal point (as most stars do when seen from the northern hemisphere) then culminates with zero azimuth. Notice that in the southern hemisphere the opposite convention (measuring from the north) would be more advantageous, as most stars there culminate between the pole and the north cardinal point.

The zenith distance z = 90° − h is often used instead of altitude. Stars at the same altitude define an almucantar, a small circle parallel to the horizon.

Equatorial Coordinate System

Fundamental plane: Celestial equator

Coordinates: right ascension α and declination δ.
The declination δ is measured perpendicularly from the celestial equator to the star. The right ascension is measured eastwards from the vernal point. The vernal point is one of the intersections of the celestial equator with the ecliptic, which is the path traced by the annual motion of the Sun in the sky.
Hour Coordinate System
Fundamental plane: Celestial equator.
Coordinates: hour angle H (measured from the meridian) and declination δ (the same as in the equatorial coordinates).
The hour angle is measured from the meridian to the hour circle of the star. It is zero when a star culminates and increases from 0h to 24h. Stars on the same hour circle have the same hour angle.
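The relations between these coordinate systems follow from spherical trigonometry. The sketch below (the observer's latitude, the hour angle and the declination are invented, illustrative values) converts hour-angle/declination coordinates into azimuth (measured from the south point, westwards, as above) and altitude, using the standard spherical-trigonometry relations.

```python
import math

def hour_to_horizontal(H_deg, dec_deg, lat_deg):
    """Convert hour angle H and declination to azimuth (from south, westwards)
    and altitude for an observer at the given latitude."""
    H, dec, lat = map(math.radians, (H_deg, dec_deg, lat_deg))
    # Altitude from the spherical law of cosines
    h = math.asin(math.sin(lat) * math.sin(dec)
                  + math.cos(lat) * math.cos(dec) * math.cos(H))
    # Azimuth measured from the south point, increasing westwards
    A = math.atan2(math.sin(H),
                   math.cos(H) * math.sin(lat) - math.tan(dec) * math.cos(lat))
    return math.degrees(A) % 360.0, math.degrees(h)

# A star one hour (15 deg) past the meridian, dec = +20 deg, observer at 45 deg N
print(hour_to_horizontal(15.0, 20.0, 45.0))
```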

Spherical triangle
A spherical triangle is the figure formed by arcs of great circle that pass through 3 points, connected by pairs, which intersect on the surface of a sphere. An Eulerian spherical triangle has each side and each angle smaller than 180°.
Corollary 1: Three points that do not belong to the same great circle define a plane that does not pass through the centre of the sphere.
Corollary 2: The sphere can be divided in such a way that the 3 points always lie in the same hemisphere; hence the length of each side of the spherical triangle cannot exceed 180°.
Corollary 3: A spherical triangle is formed only by great circle arcs; it cannot be formed by arcs of small circles.
The spherical triangle has 6 elements: 3 angles, usually referred to by capital letters (A, B, C), and 3 sides, opposite the angles, referred to by lowercase letters (BC = a, CA = b, AB = c).
The vertices of the spherical angles are the
vertices of the spherical triangle. The sides
(AB, BC, CA) are the arcs of the three great
circles. The angles (A, B, C) are measured by
the dihedral angles.
Properties
The following properties are valid for Eulerian triangles, i.e., those for which no side or angle exceeds 180°.
1. The sum of the three sides of a spherical triangle is between 0° and 360° (2π): 0° < a + b + c < 360°.
2. The sum of the three angles of a spherical triangle is greater than 180° (π) and smaller than 540° (3π): 180° < A + B + C < 540°.
3. One side is greater than the difference of the other two and smaller than their sum: |b − c| < a < b + c.
4. When two sides are equal, the two opposite angles are also equal, and vice-versa: a = b <=> A = B.
5. The order in which the values of the sides of a spherical triangle are distributed is the same as the order of the angles: a < b < c <=> A < B < C.
(A short numerical check of these properties is sketched below.)
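A small numerical sketch of these properties (the three vertices are chosen arbitrarily): the sides are the arcs between the vertex unit vectors, and each angle is measured between the tangent directions of the two arcs meeting at that vertex.

```python
import numpy as np

def unit(lat_deg, lon_deg):
    """Unit vector on the sphere from latitude and longitude (degrees)."""
    lat, lon = np.radians([lat_deg, lon_deg])
    return np.array([np.cos(lat) * np.cos(lon), np.cos(lat) * np.sin(lon), np.sin(lat)])

def side(p, q):
    """Great-circle arc (degrees) between two unit vectors."""
    return np.degrees(np.arccos(np.clip(np.dot(p, q), -1.0, 1.0)))

def vertex_angle(a, b, c):
    """Spherical angle at vertex a of triangle abc, from the tangents of arcs ab and ac."""
    t_ab = b - np.dot(a, b) * a
    t_ac = c - np.dot(a, c) * a
    cos_angle = np.dot(t_ab, t_ac) / (np.linalg.norm(t_ab) * np.linalg.norm(t_ac))
    return np.degrees(np.arccos(np.clip(cos_angle, -1.0, 1.0)))

A, B, C = unit(0, 0), unit(0, 60), unit(50, 20)      # arbitrary vertices
a, b, c = side(B, C), side(C, A), side(A, B)         # sides opposite each vertex
ang_A, ang_B, ang_C = (vertex_angle(A, B, C), vertex_angle(B, C, A), vertex_angle(C, A, B))

print("sum of sides  :", a + b + c)                  # between 0 and 360 deg
print("sum of angles :", ang_A + ang_B + ang_C)      # between 180 and 540 deg
print("triangle ineq.:", abs(b - c) < a < b + c)     # True
```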
5. INSTRUMENTATION
Telescope
Click here to recall lens operation: https://www.youtube.com/watch?v=EL9J3Km6wxI
Telescope categories: https://www.youtube.com/watch?v=_v1RWyzQAng

Introduction The night sky has always attracted people with its charming mystery. For many centuries observers used only the naked eye for their explorations, and eyesight limitations meant they could not achieve much. It is hard to overstate how important the invention of the telescope was for astronomers: it opened an enormous field for visual observations, which led to many brilliant discoveries. That happened in 1608, when a German-born Dutch eyeglass maker combined several lenses and created the first telescope [PRAS]. This episode is now almost forgotten; the device was not used for astronomical purposes and found its application in military use. The event that remains in people's memories is Galileo's construction of his first telescope in 1609. The first Galilean optical tube was very simple; it could only magnify objects three times. After several modifications, the scientist achieved higher optical power, which allowed him to observe the phases of Venus, lunar craters and four Jovian satellites. The main tasks of a telescope are the following:

• Gathering as much light radiation as possible


• Increasing an angular separation between objects
• Creating a focused image of an object
We have now achieved a high technical level, which enables us to create colossal telescopes,
reaching distant regions of the Universe and making great discoveries.
Telescope components:

The main parts of any telescope are the following:
• Primary lens (for refracting telescopes), which is the main component of the device. The bigger the lens, the more light the telescope can gather and the fainter the objects that can be viewed.
• Primary mirror (for reflecting telescopes), which plays the same role as the primary lens in a refracting telescope.
• Eyepiece, which magnifies the image.
• Mounting, which supports the tube and enables it to be rotated.
Telescopes can be divided into two main categories: refractors and reflectors.

Refracting telescopes

The history of the refractor's invention was discussed in the introduction. The refractor is the simplest type of telescope, combining two lenses at the ends of a tube. As mentioned above, the main component of this type of telescope is the primary lens – the objective, a convex (converging) lens; it determines how faint an object can be viewed. The eyepiece, a second lens, is placed at the other end of the tube and determines the magnification of the object observed. The appearance and a schematic view of a refracting telescope are shown in the Figure below.

Reflecting telescope This is a later type of telescope, first proposed by Giovanni Francesco Sagredo (1571 – 1620), who suggested that a curved mirror could be used instead of a lens. The first reflecting prototype was created by Isaac Newton in 1669. These telescopes employ parabolic mirrors to gather light from distant sources and several auxiliary mirrors that direct the light path to the eyepiece. The appearance and a schematic view of the reflecting telescope (Newtonian design) are shown in the Figure below.
Principle of operation and main parameters The light from a source reaches the primary lens (or primary mirror) and converges at the focal point of the system. Depending on the design, the light either enters the eyepiece directly (refractors) or is reflected out of the main tube towards the ocular lens (reflectors). The angular resolution is set by the diameter of the main lens or mirror, referred to as the aperture D and measured in millimetres. For visible light this limit is described by the following formula:

α = 138 / D,

where α is the limiting resolution measured in arcseconds.

Each lens or mirror has a characteristic property known as its focal length. If the focal length of the objective is F and the focal length of the eyepiece is f, then the magnification of the image produced by the optics is given by the simple formula:

M = F / f

As mentioned before, the bigger the aperture, the more light the optics can gather, and the more light enters the telescope, the fainter the objects that can be viewed. The limiting magnitude of a telescope can be found from the following formula:

ML ≈ 7.2 + 5 log d, where d is the aperture diameter in millimetres.


Another important parameter of a telescope is the focal ratio, defined as the focal length of the objective divided by its diameter. The smaller the ratio, the “faster” the optical system and the brighter the image it can produce.
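A quick numerical sketch of these relations for a hypothetical 150 mm f/8 refractor with a 20 mm eyepiece (all values are invented for illustration; note that the constant and the aperture unit in limiting-magnitude formulae vary between sources, so the formula is applied here exactly as quoted above and the result should be checked against the convention in use):

```python
import math

D = 150.0          # aperture in mm (hypothetical)
F = 1200.0         # objective focal length in mm (f/8)
f_eyepiece = 20.0  # eyepiece focal length in mm

alpha = 138.0 / D                    # resolution limit in arcseconds
magnification = F / f_eyepiece       # M = F / f
focal_ratio = F / D                  # "speed" of the optical system
m_limit = 7.2 + 5.0 * math.log10(D)  # limiting magnitude, formula as quoted in the text

print(f"resolution    ~ {alpha:.2f} arcsec")
print(f"magnification = {magnification:.0f}x")
print(f"focal ratio   = f/{focal_ratio:.0f}")
print(f"limiting mag  ~ {m_limit:.1f} (check the unit convention for D)")
```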

Mountings There are two main classes of mount: altitude-azimuth and equatorial. Telescopes on the first type of mount are aligned with the local zenith. Because such a mount carries symmetrical gravitational loads, the telescope can normally be less massive. The disadvantage is that the instrument must be moved about two axes simultaneously in order to track an object. The equatorial mount is aligned with the polar axis and requires a heavy counterweight to balance the telescope. Its advantage is that the tube has to be rotated about the polar axis only to follow the sky, which makes it much more convenient for tracking objects.

The quality of the views produced by optical telescopes is limited by optical defects, or aberrations. Refractors suffer from chromatic aberration more than from other deviations. It is the result of the difference in the speed of light of various wavelengths in the medium: blue light is refracted more strongly than red.

As a consequence of this effect, a star appears surrounded by colourful concentric rings of light. A corrective lens is placed on top of the primary objective to reduce chromatic aberration. The other types of aberration are monochromatic: spherical aberration, coma, astigmatism, distortion and field curvature. Spherical aberration results from the geometry of a spherical surface, which focuses light onto a line rather than a point; rays from the centre of the mirror therefore focus farther from it than rays from the edges. This is fixed by employing parabolic mirrors. The remaining aberrations are off-axis; they depend on the field angle.
Space Flight Particle Instrument
A Space Flight Particle Instrument is a type of sensor or detector that is used to measure the
presence and properties of particles in the space environment. These instruments can be used
on a variety of space-based platforms, including satellites, space probes, and manned
spacecraft. Some examples of space flight particle instruments include:

Cosmic ray detectors: These instruments measure the energy and charge of cosmic
rays, which are high-energy particles that travel through space.

Solar particle detectors: These instruments measure the energy and charge of particles emitted by the Sun, for example during solar flares and coronal mass ejections.

Plasma instruments: These instruments measure the properties of the charged particles
that make up the solar wind and other space plasmas, such as temperature, density,
and velocity.

Magnetometer: This instrument measures the strength and direction of the magnetic
field in space.

Dust detector: This instrument measures the presence and properties of dust particles
in the space environment.

Charged particle detector: This instrument measures the presence and properties of
charged particles such as electrons and ions in space.

These instruments are important for understanding the space environment and the effects of
solar activity on the Earth's environment. They also help to protect spacecraft and astronauts
from the hazards of space radiation.
A typical space flight particle instrument consists of several elements (Figure below).

Which of the elements are present depends on the particular instrument technique and
implementation. First, there invariably is a collimator or gas inlet structure. This structure
essentially defines the field of view and shields the subsequent sections from unwanted stray
particles, photons and penetrating radiation. Following this section, in neutral particle instruments only, there is an ionization element to convert the neutrals into ions that are amenable to further analysis by electromagnetic fields or, when appropriate, by time-of-flight techniques. After the collimator and ionization sections, there is an initial analyzer, such as a solid-state detector or an electrostatic analyzer, where the charged particles are filtered according to their energy per charge. This may be followed by a second analyser
section that performs ion mass discrimination. Finally, the particle encounters a detector that
converts the arrival of the particle (and often its energy) into an electric signal that can be
further processed in a signal processing section. The resulting digital data is passed to the
spacecraft data handling system and relayed to the ground via regular spacecraft telemetry.
On the ground, the raw data is further processed to obtain physical parameters. Mission
design, spacecraft design, hardware design choices as well as software compression and
binning schemes affect the performance of the instrument. Starting from the environment that needs to be studied (density, species, characteristic velocity if any, characteristic Mach numbers if any, temperature, pressure tensor, knowledge of the distribution function, intrinsic time scale of the phenomenon, boundary types and characteristic lengths), measurement requirements are derived, such as geometric factor, signal-to-noise ratio, mass range, mass resolution, energy range, energy resolution, field of view, energy/angle resolution, the time resolution of the measurement, and analyser type.
Although space particle instruments have been constructed in a wide variety of geometries and using many combinations of particle energy, charge state, particle mass, and species analysis, there are in fact only a few basic techniques for selecting particles with specific properties.
These are analysis solely by static electric fields, analysis solely by magnetic fields, analysis by combinations of electric and magnetic fields, analysis by time-varying electric fields (sometimes in combination with static magnetic fields), analysis by determining a particle's time-of-flight over a fixed distance, and analysis by determining a particle's rate of energy loss through matter.
Contemporary space flight instruments almost always use either open window electron
multipliers or silicon solid-state detectors to detect those particles that are passed by the
various analyser elements. Determining the performance of such particle detectors is critical
to the overall instrument laboratory calibration because their post-launch stability is always
an important factor. Since a Faraday cup is sometimes used as an integrating current detector
in a few particle detector systems and often forms an important element in laboratory
calibration facilities, the design and operation of a Faraday cup is also discussed. The basic
principle behind each of these analysis techniques is briefly described in this chapter. Each
section on the analysis principle usually contains a more detailed description of specific
instruments in order to provide background for the material on the calibration and in-flight
performance verification of those instruments that appear in subsequent chapters. Whenever a
specific instrument involves a special feature, for example, unusual collimator design or a
process to convert a particle from a neutral to an ion, that feature is highlighted.

Error Sources
Sources of errors or uncertainties in the measurements and the derived physical quantities are
numerous. If the goal is to compute fluxes or distribution functions, then uncertainties arise from
• Uncertainties in geometric factor,
• Degradation of detector efficiency,
• Degradation of analyzer voltages,
• Out-of-band response,
• Sensitivity to solar UV,
• Dead-time effects at high count rates,
• Poor counting statistics at low incident fluxes,
• Aliasing caused by time variations in the incident particle population.
Some of these uncertainties are introduced by imperfections in design and/or calibration.

Calibration concerns primarily the total geometric factor of an instrument, including detector
efficiency, or the determination of the energy- and angle-passbands.
Any degradation in detector efficiency after calibration introduces (unknown) uncertainties if such
changes remain undetected or cannot be quantified. Similar uncertainties are introduced if the
voltages applied to define the energy- and/or angle-passbands degrade in some unknown fashion.

Any responses to solar UV, or to particles outside the primary energy-/ angle-passbands can in
principle be determined through extensive calibration, but complicate the conversion to meaningful
quantities, and thus fall more into the category of design-driven uncertainties. Dead times in the
detector or its associated electronics introduce losses in counts at high count rates that can be
calibrated out in some limited sense only. Low values of the counts accumulated per sampling interval
introduce uncertainties that have nothing to do with calibration but are a fundamental experimental
limitation. Poisson statistics guarantee that their relative uncertainty decreases as one over the
square root of the counts. A design with increased geometric factor is not a solution for this problem
if the detector would then be saturated in other, high-intensity environments. Time variations in the
incident particle distribution that occur within the accumulation time of the measurements are an
obvious source of errors as well. Speeding up the sampling solves this problem only if the statistical
error resulting from the reduced number of counts per sample remains adequate.
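A sketch of the counting-statistics limitation described above (the incident rate and accumulation times are invented for illustration): the relative uncertainty of a Poisson-distributed count N is 1/sqrt(N), so shortening the accumulation time to gain time resolution directly worsens the statistical error unless the incident flux is high enough.

```python
import math

def relative_poisson_error(counts):
    """Relative 1-sigma uncertainty of a Poisson-distributed count."""
    return 1.0 / math.sqrt(counts) if counts > 0 else float("inf")

rate = 400.0                                   # counts per second reaching the detector (illustrative)
for accumulation_time in (1.0, 0.1, 0.01):     # seconds per sample
    n = rate * accumulation_time
    print(f"t = {accumulation_time:5.2f} s  N = {n:6.0f}  "
          f"relative error = {100 * relative_poisson_error(n):5.1f} %")
```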
If the goal is to compute moments of the particle distribution function, then additional errors
arise from
• Limited energy range and/or energy resolution,
• Incomplete angular coverage and/or resolution,
• Spacecraft charging.
Obviously, if an important part of the incident distribution is not measured, because it falls outside the
energy- and/or angle-range of the instrument, one cannot expect the moments, for example, the
particle number density, to correctly represent the incident population. Spacecraft charging presents a
special problem. If it is such that it attracts the particles of interest (negative in case of ions, positive
in case of electrons), then it increases the energy of the incident particles. This energy increase can be
corrected for if the value of the potential is known. However, if the sign of the potential is such that it
retards the particles of interest, then there might be particles in the incident distribution that can no
longer reach the detector, with obvious consequences that cannot be corrected.
Important Characteristics of Analysers
When selecting an instrument for a particular mission or comparing different plasma
instruments certain key parameters have proven to be very useful. These are: energy or
velocity range, the field of view, velocity space resolution, and geometric factor that
determines the sensitivity and temporal resolution. Also to consider are the temporal
resolution for a two-dimensional and for a three-dimensional cut of the velocity phase space.
Equally important are resources that the instrument requires from the spacecraft such as mass,
power, size and telemetry rate. Charged particle optics borrows many terms from photon optics, such as spectrograph, spectrometer, fringing fields, and aberration. For example, a cylindrical ESA is the charged-particle-optics analogue of the scanning spectrograph in photon optics. There is, however, an important difference between charged particle optics and photon optics: in charged particle optics the optical properties and the dispersion interact.
Detectors
There are relatively few detector types used in space physics to detect particles, either
charged or neutral. These include Faraday cup devices to measure the current associated with
charged particle distributions, windowless electron multipliers such as channel electron
multipliers (Channeltrons) and microchannel plates that may be operated in either a pulse
counting or an integrated current mode, and solid-state or scintillation detectors used for
higher energy particles. The succeeding sections discuss each of these particle detection
technologies.

Faraday Cups
Faraday cups are generally simple to construct and are fast, accurate current collectors. The collectors are connected directly to current measuring devices, and current measurements as low as 10^−15 A are possible with modern electrometers. The measurement accuracy of a Faraday cup is affected by a series of secondary processes created by particles impacting the cup, such as the emission of secondary electrons and secondary ions or the reflection of charged particles, the collection of electrons and ions produced near the cup (for example by ionization of the residual gas or at the aperture structure), current leakage to ground, the formation of galvanic elements due to the use of different materials, and the penetration of particles through the cup structure. Escaping secondary electrons are minimized by a suppressor grid, biased to about −30 V, placed directly in front of the collector plates, by biasing the cup together with the measuring electronics, and by geometric design in which the collector plates are mounted at the end of a long, high-aspect-ratio cylindrical tube. To measure very low ion currents, an additional shielding cylinder should be used to screen the Faraday cup from stray ions or electrons. When the proper design precautions are taken, Faraday cups are well suited for absolute current measurements because they are not affected by the gain degradation that channel electron multipliers or microchannel plates suffer.
The Rosetta/ROSINA double-focusing mass spectrometer includes a Faraday cup in that
instrument’s detector system in addition to microchannel plates and Channeltrons. The long-
term stability of the Faraday cup provides the absolute calibration for the other detectors that
may suffer degradation with time, as well as providing measurements during times of
exceptionally high fluxes. Faraday cups also serve the very important function of particle
beam monitors in laboratory calibration facilities
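A rough illustration of the sensitivity quoted above (a sketch, not a design figure): a collected current of 10^−15 A of singly charged ions corresponds to only a few thousand elementary charges per second.

```python
ELEMENTARY_CHARGE = 1.602e-19      # coulombs

def particles_per_second(current_amps, charge_state=1):
    """Number of ions per second implied by a collected DC current."""
    return current_amps / (charge_state * ELEMENTARY_CHARGE)

# The ~1e-15 A floor of a modern electrometer, singly charged ions:
print(f"{particles_per_second(1e-15):.0f} ions/s")   # ~6200 ions per second
```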

Discrete Electron Multiplier (https://www.youtube.com/watch?v=f61eMq4Wg4w )


Open windowed discrete dynode electron multipliers utilize the same electron multiplier
technology as the conventional photomultiplier tube although without the protective glass
envelope. The absence of a window permits the entry of low energy particles to the cathode
of these devices initiating a cascade of secondary electrons whose numbers increase from one
dynode interaction to the next. The multiplication produces a detectable signal at the final
dynode of the chain (see Figure). The fact that the dynode structure is not enclosed means
that either the materials chosen must be stable on exposure to air or the device is enclosed
under vacuum and exposed only when in space.

Continuous Electron Multiplier (https://www.youtube.com/watch?v=9wAa5ZK94ko )


A technology has been developed to produce high-resistance surfaces on glass that have a large secondary electron production ratio and are stable on exposure to air. These materials have formed the basis for electron multiplier devices of more compact design than the discrete dynode devices. Because the accelerating electric field necessary to produce electron multiplication can be distributed uniformly along the resistive surface by applying a voltage difference, these devices were originally known as continuous dynode electron multipliers, in contrast to discrete dynodes. The first devices using this technology that were suitable for use as particle detectors in space were Channel Electron Multipliers (CEMs). CEMs consist of small capillary tubes, about 1 mm in inside diameter and several cm long. When a potential of several kilovolts is imposed from one end to the other, a single electron produced at the low potential end is accelerated down the tube and, at every collision with the tube wall, produces several secondary electrons that continue the process. Overall gains of >10^8 are possible. It was found that straight CEMs were unstable at gains of >10^4 because of ion feedback. Ion feedback is caused by the cascading electrons ionizing residual gas inside the device toward the high potential end; the positive ions are then accelerated toward the low potential (input) end, where they may initiate a new cascade. To suppress ion feedback, CEMs are curved so that any ion that is created strikes the tube wall before gaining sufficient energy to reinitiate an electron cascade. CEMs can be fabricated in a variety of geometries, including “C”-shaped, spiral, and helical, and with funnel-like entrance cones to increase the particle collection area. One such configuration, together with typical electrical connections, is shown in the Figure below.
CEMs require a 2–4 kV bias voltage to achieve gains of 10^6 to >10^8 (Figure 2.5). For a fixed voltage, the gain depends on the length-to-diameter ratio, which sets the number of secondary electron multiplications. The gain and detection efficiency are weakly dependent on the incident particle mass and energy above some threshold energy (Figures 2.5–2.7). Incident electrons require several hundred eV, and ions several keV, to obtain good detection efficiency. Uniform gain is observed for count rates whose pulse current is <10% of the nominal CEM bias current. Operating pressures <10^−5 mbar are recommended, with background rates decreasing significantly as pressures drop below 10^−6 mbar. For early work on CEM efficiencies see Bennani et al. [1973], which shows the range of variability of the energy-dependent gain for a variety of devices. CEMs are generally operated in pulse-saturated counting mode with gains of ~10^7–10^8. Detector thresholds can then be set to a small fraction of the nominal gain to eliminate dark current counts. One generally operates the CEM 50–100 V above the knee in the counting rate plateau to prevent loss of counts due to gain droop at high count rates. Higher bias voltages are not used, in order to minimize background counts and to maintain CEM lifetime. Count rates >10^6 s^−1 can be achieved in the linear regime with background rates <0.5 s^−1. CEMs can also be operated in an analogue mode, where variations in the CEM current are used to measure the particle flux rather than counting individual events. Higher current CEMs are used to increase the dynamic range in analogue mode. For a 40 μA bias current, a dark current of 1 pA, a gain of 6×10^6, and linear output up to ~10% of the bias current, a dynamic range of ~4×10^6 can be achieved. The multipliers are always baked at 250–280 °C in a vacuum after exposure to air to remove water vapour and other contaminants. An initial burn-in procedure is used, which consists of gradually raising the high voltage while monitoring the outgassing pressure, and the dark current is characterized. After this period the multipliers are tested in an ion beam, where integrated pulse height distributions are taken and multiplier gains are calculated. If the multipliers are satisfactory they are then installed in the detector system. For calibration purposes it is useful to test the detector signal chain; most often this is done with a pulse signal capacitively coupled into the detector signal line just after the anode.
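Two of the numbers quoted above can be reproduced with simple arithmetic (a sketch using the same illustrative values from the text): the maximum count rate for which the average output pulse current stays below ~10 % of the bias current, and the dynamic range in analogue mode.

```python
E = 1.602e-19                      # elementary charge, C

# Pulse-counting mode: keep the average output current below ~10% of the bias current
bias_current = 40e-6               # 40 uA bias current (value quoted in the text)
gain = 6e6                         # CEM gain (value quoted in the text)
max_rate = 0.10 * bias_current / (gain * E)
print(f"max count rate ~ {max_rate:.1e} counts/s")   # a few 1e6 counts/s

# Analogue mode: dynamic range = linear output limit / dark current
dark_current = 1e-12               # 1 pA
linear_limit = 0.10 * bias_current # ~10% of the bias current
print(f"dynamic range  ~ {linear_limit / dark_current:.0e}")   # ~4e6, as quoted
```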

Microchannel Plates

Microchannel plates (MCPs) are electron multipliers that are used in a variety of scientific and
technical applications, particularly in the field of vacuum and low-light-level detection. They consist
of a stack of thin glass plates that have been etched with a large number of tiny channels, typically
between 10 and 100 micrometers in diameter. These channels are typically arranged parallel to one
another and are separated by thin walls called septa. When an electron or other charged particle strikes
the surface of an MCP, it causes a cascade of secondary electrons to be emitted from the walls of the
channels, which in turn generates a much larger number of electrons. This process is known as
electron multiplication.

Microchannel plate (MCP) detectors began replacing channel electron multipliers (CEMs) as the detector of choice for low energy ion and electron detection in most plasma instruments in the mid-1980s. As with CEMs, MCPs are electron multipliers produced by a voltage bias across a resistive glass tube that generates an electron cascade through secondary electron production. MCPs consist of an array of microscopic glass tubes (typically 12–25 μm spacing), hexagonally packed and sliced as thin wafers (0.5 or 1.0 mm thick) with typical microchannel length-to-diameter (L:D) ratios between 40:1 and 80:1. The wafers are treated by high-temperature (250–450 °C) reduction in a hydrogen atmosphere to produce a resistive coating along the microchannels, and the top and bottom surfaces are metallized (see the literature for a description of the manufacturing technique). MCPs were developed for use in night vision equipment by the military, but have subsequently been replaced by CCD technology in most military applications. MCPs are still readily available and provide compact front-end particle or photon detection with a high signal-to-noise ratio, allowing individual event counting. Background rates <1 cm^−2 s^−1 can be achieved, with the limiting rate apparently due to beta decay of 40K in the glass. As with CEMs, MCPs require operating pressures <10^−5 mbar. The microchannel plate can also be used to obtain a spatial distribution of ions. MCP wafers (typically 0.5 mm or 1.0 mm thick) are sliced at a small bias angle (typically 8–12°) relative to the microchannel axis. They are stacked in pairs (Chevron configuration) or triplets (Z-stack), with adjacent wafers having opposite bias angles (Figure 2.9) in order to suppress the ion feedback effect discussed above. Typical bias voltages are ~1 kV per plate and typical gains are ~1000 per plate. The bias voltage is generally chosen so that the secondary electron production at the back of the MCP stack is near the microchannel saturation, resulting in a roughly fixed charge per microchannel firing.

Chevron configurations produce charge pulses of ~10^6 e^−, which are readily detected with charge sensitive preamplifiers. Careful attention to the detection electronics design can result in electronic noise levels <10^5 e^−, allowing preamplifier thresholds well below the nominal gain. Pulse height distributions (PHDs) with a roughly Gaussian shape and a FWHM equivalent to ~50–100% of the peak height are typical, with the FWHM depending upon the MCP gain (Figure below). These PHDs allow >95% of the events to appear above threshold. The voltage required for these gains depends upon the L:D ratio and the micropore diameter. The L:D ratio generally sets the number of electron multiplications for a fixed bias voltage. However, at high gains the micropores will saturate, and the saturated gain will depend on the pore diameter and the number of micropores that fire. A chevron pair of 80:1, 1 mm plates will typically require several hundred volts more bias than 40:1, 1 mm plates. Discrete anodes with separate preamplifiers allow the highest counting rates but limit the position resolution for detecting counts. For better charge pulse position resolution, imaging systems utilize resistive anodes, delay lines, or wedge-and-strip anodes, which offer extremely fine position sensing approaching that of the microchannels. Another solution for position resolution is the Multi-Anode Microchannel Array (MAMA) detector. MAMA detectors are large arrays of pixels (e.g. 512×512) of 25 μm size, which are placed behind a curved channel plate. These detectors have been developed for ground-based and space-borne instrumentation. Imaging systems generally require complex electronics that are sensitive to the MCP's PHD, and typically allow count rates of 10^5–10^6 counts per second, depending upon the resolution desired.
For discrete anodes, care should be taken to characterize the response near anode boundaries. Depending upon the separation between anodes, the MCP-to-anode gap, the nominal MCP gain, and the preamplifier threshold, double counting or missed counts may occur as the exiting charge is split between anodes. A similar non-uniform response can also arise from obstructions at the analyser entrance or exit. Generally, these non-uniformities will have little or no impact on the measurements unless the particle beam is extremely narrow in angle and falls on a small portion of the detector. As part of the test and calibration procedures, it is important to characterize the instrument's response to a high counting rate. In particular, the counts lost at a high counting rate may depend both on the preamplifier dead time (which may depend upon MCP gain) and on the MCP PHD droop at high counting rates (which can result in loss of counts below threshold). Generally, MCP droop is to be avoided, since it does not allow for a simple dead-time correction algorithm. A general rule of thumb is to try to keep the average charge pulse current at the highest counting rates (using nominal gain) to less than 20% of the MCP current (see the sketch below). An important part of the calibration process is determining the MCP detection efficiency for particle counting, which depends upon the angle, energy and mass of the particles that strike the detector. Angle and energy efficiency variations have been characterized. To produce an instrument with a relatively uniform response to the input particle flux, the front of the MCPs is voltage biased to accelerate incoming particles. Typically the full bias voltage (~−2 kV) is applied to the front of ion detectors to assure adequate efficiency, and several hundred volts of pre-acceleration is common for electron detectors. To minimize angle efficiency variations, and especially to avoid particles striking the MCPs at angles aligned with the microchannels, instruments should be designed with knowledge of bias angle effects.
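The sketch below illustrates two of the rules of thumb above: a standard non-paralyzable dead-time correction (a generic textbook formula, not necessarily the algorithm used for any particular instrument) and the counting-rate limit implied by keeping the average pulse current below ~20 % of the MCP strip current. All numerical values are invented for illustration.

```python
E = 1.602e-19                        # elementary charge, C

def deadtime_corrected_rate(measured_rate, dead_time):
    """Non-paralyzable dead-time correction: true = measured / (1 - measured * dead_time)."""
    return measured_rate / (1.0 - measured_rate * dead_time)

# 400 kcounts/s measured with an assumed 200 ns per-event dead time
print(f"{deadtime_corrected_rate(4e5, 200e-9):.3e} counts/s (dead-time corrected)")

# 20% rule: average charge-pulse current should stay below ~20% of the MCP strip current
strip_current = 20e-6                # A, set by plate resistance and bias (illustrative)
gain = 1e6                           # chevron-pair gain (illustrative)
max_rate = 0.20 * strip_current / (gain * E)
print(f"max count rate ~ {max_rate:.1e} counts/s")
```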

Energy Loss of Particles in Matter


The process of energy loss of particles in matter is important in trying to understand the
response of sensors to high energy particles. Heavy charged particles, such as protons,
interact with the material they are traversing by a series of distant collisions with the
electrons in the material. Each interaction results in a small energy loss and almost no
scattering. The result is that protons travel in nearly straight lines as they stop and the
dispersion in the energy loss or range, when traversing a material is small. Electrons, on the
other hand, can lose a large fraction of their energy and undergo significant angular scattering
in a single collision with a target material electron since both particles have the same mass. In
addition, the electron direction of motion can also be changed, to the point of being reversed,
by a collision with an atomic nucleus. Since the energy loss process is very different for heavy charged particles (protons and other ions) and for light particles (electrons), it is natural to consider them separately. The process of energy loss by electrons is much more complex than for ions due to the electron's small rest mass. Electron paths in the material are typically full of sharp turns, some of them severe enough to cause the electron to be backscattered out of the material. The backscatter fraction depends on the Z of the material and rises from about 10–15% for Al to about 40% for NaI for electrons with energies below 1 MeV. Another factor in the electron stopping power is the emission of bremsstrahlung, or electromagnetic radiation. Bremsstrahlung is emitted whenever the electron is accelerated, such as when it is deflected through a large angle or undergoes a collision with a large energy loss. Nevertheless, range-energy tables for electrons have been developed and are used to compute model energy losses in the thin-absorber case and to determine the range of electrons when computing the amount of shielding needed for a particle detector or a radiation-sensitive spacecraft component. Finally, angular distributions of scattered electrons have been studied, and the published reports (and references therein) can be used to statistically predict electron behaviour in matter.
Silicon Solid-State Detectors
Silicon solid-state detectors (SSDs) are built using ultra-pure silicon crystals. They are manufactured in several types, depending on the dopants introduced to the crystal and the method by which they are introduced into the crystal lattice. However, the basic operation of all the types of detectors is the same. As a charged particle traverses the crystal it interacts with the valence band electrons and promotes them to the conduction band. Once in the conduction band, electrons are free to move in response to an externally applied electric field. For each electron promoted, a hole is created in the valence band. The hole behaves as a positively charged particle and also moves in response to the electric field. Both electrons and holes are referred to as carriers. As the secondary (conduction band) electrons move through the crystal, they also interact with the valence electrons and create more electron-hole pairs. Approximately 3.6 eV are required to produce one electron-hole pair in silicon. The crystals have electrodes on both sides and operate as a reverse biased diode. The applied electric field attracts the carriers to their respective electrodes and prevents them from recombining. The total charge collected at the electrode is proportional to the energy lost in the crystal by the incident particle; if the incident particle is stopped in the crystal, the collected charge is proportional to the particle energy. A tube of high-density plasma (10^15–10^17 cm^−3) is produced in the wake of the incident particle. The applied electric field must be sufficiently strong to drive the two types of carriers apart before they can recombine. In addition, the field must also result in a collection time much smaller than the carrier lifetime, which is limited by recombination and by trapping of carriers by impurities and defects in the silicon lattice. Under typical conditions, carrier collection times are of the order of 10^−8–10^−7 s, which requires the detector carrier lifetimes to be about 10^−5 s. Defects and impurities in the silicon lattice result in the creation of trapping centres and recombination centres. The first type of centre captures either electrons or holes and, due to the long trapping time, often of the order of milliseconds, prevents them from being collected. The second type of centre can capture both electrons and holes and cause them to recombine.

Two additional aspects of silicon solid-state detectors bear a brief discussion. The first is the effect of a dead layer, of order 100 nm thick, at the surface of the detector. The energy lost by incident particles transiting this layer to enter the active volume of the detector does not contribute to the creation of free charge in the detector proper and the resultant signal. If the energy of the incident particle is to be recovered from the solid-state detector response, the energy lost in the dead layer (and any energy lost in passing through the electrode material) must be taken into account. This is particularly important when low energy particles are to be detected, because those energy losses may be a significant portion of the original energy of the incident particle. The second effect is termed the mass defect and occurs because a portion of the energy lost by an ion in the active volume of the detector does not result in the production of free charge that will contribute to the signal produced from the detector. That portion of the total energy loss of an ion that does not produce free charge in the SSD increases with the atomic weight of the incident ion. If the original energy of an incident ion is to be inferred from the SSD signal, especially if heavy ions are detected, the mass defect must be known so that the SSD signal can be corrected for it.

The current resulting from charge collection, usually taken from the anode (electron collection), is fed into a charge sensitive pre-amplifier that converts it into a voltage tail pulse (fast rise followed by a long decay). This pulse is fed, in turn, into a linear amplifier that shapes and amplifies the signal to produce a short, peaked pulse with an amplitude proportional to the collected charge. Further processing can be carried out using standard pulse processing techniques. A high energy heavy ion passing through the detector can produce a very large output signal that has been known to paralyze the charge sensitive pre-amplifier for a significant length of time, effectively introducing an abnormal dead time. The amplifier-discriminator electronics should be designed to avoid such paralysis. A large number of low energy particles entering the detector within a time short compared to the charge sensitive pre-amplifier's integration time will mimic the signal from a higher energy particle (pulse pile-up). Because the pre-amplifier integration time cannot be made arbitrarily short, there is no electronic way of eliminating this problem; an instrument design that prevents particles with energies much lower than the desired threshold energy from reaching the detector is the only way of mitigating pulse pile-up. The advantages of SSDs are compact size, good energy resolution, fast timing resolution (coincidence timing is possible at sub-nanosecond levels), and the ability to tailor the crystal thickness to match requirements. SSD disadvantages include the limitation to small thicknesses (< 1 mm) and susceptibility to damage by incident radiation.
The limitation on detector thickness means that protons with energies greater than
about 14 MeV will not stop in the detector. One way of measuring energies of higher energy
particles is to arrange two or more Si detectors, one after the other, in a co-axial configuration
(detector telescope). The most common configuration has two detectors. The incident energy
is set to be the sum of the energy depositions in the two detectors while the pattern of energy
losses in the detectors can uniquely identify the incident particle type, proton, electron or
heavy ion species. Occasionally a final detector element is placed behind the other detector
elements in the telescope and operated in anti-coincidence to identify any particle that
traversed through all detectors in the telescope and would otherwise have its energy and
species misidentified.
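A back-of-the-envelope sketch of the charge signal discussed above, using the ~3.6 eV per electron-hole pair figure (the deposited energy is an arbitrary example and full charge collection is assumed):

```python
E_PAIR_EV = 3.6                    # eV per electron-hole pair in silicon
ELEMENTARY_CHARGE = 1.602e-19      # C

def collected_charge(deposited_energy_mev):
    """Electron-hole pairs and collected charge for a given energy deposit
    (assumes full collection, no dead-layer loss and no mass defect)."""
    pairs = deposited_energy_mev * 1e6 / E_PAIR_EV
    return pairs, pairs * ELEMENTARY_CHARGE

pairs, q = collected_charge(5.0)   # e.g. a 5 MeV proton stopping in the detector
print(f"{pairs:.2e} e-h pairs  ->  {q:.2e} C (~{q * 1e15:.0f} fC)")
```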

Scintillators and Cherenkov Radiators


Scintillators are materials that emit light when an incident charged particle traverses
the material. The light output is, to a good approximation, linear with deposited energy. The
density and stopping power of inorganic scintillators makes them highly useful for high
energy particle spectroscopy for both protons and electrons. Key scintillator applications are
those where SSDs are insufficiently thick to stop the high energy particle and a scintillator is
used to complete the sensor. An inorganic scintillator consists of an ionic crystal doped with activator atoms. Ionizing particles traversing the crystal produce free electrons, free holes, and bound electron-hole pairs (excitons). These travel through the crystal until they encounter activator atoms in their ground states and excite them. The subsequent decay of the activator back to the ground state results in the emission of scintillation photons. A second type of scintillator, the organic plastics, is used only in limited roles on spacecraft. These materials are of low density and so require a greater amount of material to provide the same energy loss as a much smaller inorganic scintillator; the larger volume in turn requires extensive shielding and leads to large, massive detectors. One area where the plastics are useful is as veto counters or active shields. A shaped piece of plastic scintillator is placed so that it nearly surrounds the sensors to be shielded. The signal from the plastic acts as a veto in a coincidence circuit with the detector signal, since a signal in the plastic indicates that an out-of-aperture particle must have struck the shielded sensors. In a large class of organic scintillators, the effect of the incident particles is to produce a population of electrons in the first excited singlet state and its associated vibrational levels. These vibrational levels quickly decay, with no radiation emission, to the first excited state. This transition is followed by further decay to the various vibrational states associated with the ground state, with a characteristic time of the order of a few ns (prompt fluorescence). In this case the fluorescent light is only weakly absorbed by the organic material, making it suitable for use as a scintillator.

An example of the use of SSDs and scintillators in a flight instrument is shown in the Figure below. The instrument consists of four SSDs (D1–D4), two inorganic scintillators (S1 and S2) and a plastic scintillator (S3). The SSDs are used to define the field of view of the instrument. The particle energies of interest are so high (up to 400 MeV) that no collimation or shielding is possible. The two inorganic scintillators are composed of a dense material, Gadolinium Silicate (GSO), and their function is to absorb as much energy as possible from the incident high energy particles. Finally, the S3 plastic scintillator is used as a veto for out-of-aperture particles striking S1.

One advantage of inorganic scintillators is their high stopping power for spectroscopy of high energy particles. Organic scintillators have very short emission times, making them excellent choices for use as a veto in coincidence circuits. Finally, both scintillator types can be physically shaped to meet design requirements. One disadvantage of scintillators is their relatively poor energy resolution compared with SSDs. This is due to (1) the much greater energy required to produce a scintillation photon (40–50 eV) than to produce an electron-hole pair in silicon (3.6 eV), and (2) inefficiencies in transporting the photons to a light measuring device. Another disadvantage is that scintillator light is usually measured by photomultipliers; these are relatively large devices, which makes it difficult to design, package and shield the sensor. In the last few years, with the development of fast, high-light-output scintillators, it has become feasible in some cases to use photodiodes and avalanche photodiodes, which resolves the issues connected with photomultiplier size.
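The first of the two points above can be made quantitative with a simple counting argument (a sketch: the 1 MeV deposit, perfect collection, and a 45 eV-per-photon figure are assumptions, and light-transport losses and the Fano factor are ignored): the smaller number of information carriers in a scintillator directly worsens the Poisson-limited resolution.

```python
import math

def poisson_fwhm_percent(deposited_ev, ev_per_carrier):
    """Poisson-limited FWHM (%) from the number of information carriers produced."""
    carriers = deposited_ev / ev_per_carrier
    return 2.355 * 100.0 / math.sqrt(carriers)     # FWHM = 2.355 sigma

E = 1e6   # 1 MeV deposit (illustrative)
print(f"silicon SSD (3.6 eV/pair)  : {poisson_fwhm_percent(E, 3.6):.2f} % FWHM")
print(f"scintillator (45 eV/photon): {poisson_fwhm_percent(E, 45.0):.2f} % FWHM")
```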

Langmuir Probes
Langmuir probes (LP) have been used extensively on rockets and satellites to measure
ionospheric electron and ion densities, electron temperature, and spacecraft potential. This
section discusses the design and implementation of LP measurements, with particular
emphasis on cylindrical probes that have been used more extensively than any other type.
The key lesson of more than three decades of LP use in space is that the accuracy of the measurements depends primarily on avoiding implementation errors. Experience with Langmuir probes since 1959 has shown that most measurement errors arise from: the type of collector surface material used, failure to avoid surface contamination or to provide for in-flight cleaning of the collector, failure to place the collector an adequate distance from the spacecraft and from various appendages that might interfere with its access to undisturbed plasma, failure to design the electronics to adequately resolve those portions of the volt-ampere curves that contain the desired geophysical information, and failure to assure that the spacecraft can serve as a stable potential reference for the measurements.
Langmuir Probe Technique (https://www.youtube.com/watch?v=u44NH1o6Tp8 )
The LP technique involves measuring the current to a probe as a function of an applied
voltage. The current is the sum of the ion, Ii , and electron, Ie, currents collected by the probe.
The voltage is applied to the probe with respect to the satellite reference potential. With
careful spacecraft design, the applied potential is proportional to the voltage between the
probe and the undisturbed plasma being analyzed. The resulting current-voltage
characteristic, called the “volt-ampere curve” or “V-A curve” is a function of the plasma
parameters, electron density, Ne, electron temperature, Te, ion mass, mi , and ion density, Ni,
as well as the probe surface properties, and the probe geometry and orientation relative to the
spacecraft velocity, magnetic field vector, and spacecraft body. Simple Langmuir probe theory [Mott-Smith and Langmuir, 1926] shows that the amplitude of the electron current, Ie, is proportional to Ne, and the amplitude of the ion current, Ii, is proportional to Ni. The current of retarded particles is proportional to the exponential of the voltage, V, divided by the temperature, I ∝ exp(qV/kT). Thus the slope of the logarithm of the retarded electron current with respect to voltage is inversely proportional to the electron temperature, and the V-A curve is a strong function of temperature.
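A minimal sketch of how Te can be extracted from this relation (synthetic data with an assumed temperature, not flight data): in the electron-retarding region ln Ie is linear in V with slope e/(kTe), so a straight-line fit to the logarithm of the retarded current returns the electron temperature.

```python
import numpy as np

K_B = 1.381e-23        # Boltzmann constant, J/K
Q_E = 1.602e-19        # elementary charge, C

# Synthetic retarding-region data for an assumed electron temperature of 2000 K
Te_true = 2000.0
V = np.linspace(-0.5, 0.0, 40)                    # probe voltage relative to the plasma, V
Ie = 1e-6 * np.exp(Q_E * V / (K_B * Te_true))     # retarded electron current, A

# Fit ln(Ie) = const + (e / k Te) * V  ->  Te = e / (k * slope)
slope, _ = np.polyfit(V, np.log(Ie), 1)
print(f"recovered Te ~ {Q_E / (K_B * slope):.0f} K")
```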

pic: Block
diagram of the
Langmuir probe
instrument and a
theoretical V-A
curve.
Many years of experience indicate that the accuracy of the measurements depends on the
details of the implementation. The factors most critical to success involve: (1) using a relatively short probe that has inherently low surface patchiness and that can be cleaned very
early in the mission by electron bombardment, (2) mounting the probe on a boom that is long
enough to place the collector in the undisturbed plasma that lies beyond the spacecraft ion
sheath, (3) using adaptive circuitry in the electronics to resolve the V-A curves over a wide
range of Ne and Te values, (4) designing the spacecraft to have an adequate conducting area
and a solar array that does not cause Vp to be excessively negative. The degree of success
achieved can be determined by a series of internal consistency tests. First, the volt-ampere
curves should exhibit the form indicated by the theory. The ion saturation regions should be
approximately linear and have a slope that is consistent with the known mean ion mass in the
region. The electron saturation region should exhibit the expected voltage dependence. The
electron retarding region should be truly exponential over several kTe. The Ne measurements
should be consistent with the Ni measurements at densities where the two techniques overlap,
recognizing that end effects (for short probes) will tend to cause Ne to be slightly higher than
Ni. No hysteresis should be evident in the curves when the probe voltage is swept in opposite
directions, and no time constants should be evident in Ii immediately following the sweep
retrace. These criteria for internal consistency are very demanding and, if met, one can gain a
high degree of confidence in the measurements. It can be concluded that Langmuir probes
can provide accurate ionosphere measurements when several important implementation
challenges are successfully addressed.

6. HUBBLE TELESCOPE
The sophisticated hardware and software of the Hubble Space Telescope are managed, developed, integrated, tested, and operated by a complex organization of participants. To achieve the scientific results expected from the observatory, this team of highly qualified participants must interact effectively and implement systems engineering discipline in every aspect of their responsibilities and effort.
The Hubble Space Telescope comprises three major modules, as depicted in Figure 1:
The Optical Telescope Assembly (OTA),
The Support Systems Module (SSM), and the Scientific Instruments (SIs).
The OTA and SIs contain all of the optical elements and the scientific payload, whereas the SSM provides all necessary spacecraft functions such as electrical power and its management, communications, data management, precision pointing control, stray light protection and structural support between the OTA, the SI payload and the STS. The module system configurations and performance follow from the scientific mission objectives and requirements, constrained by the STS and by orbital operational considerations.

Optical Telescope Assembly (OTA)

The mission objectives and optical system requirements are mutually supportive; the optical requirement is derived from the mission objectives. For example, an optical wavefront quality of λ/20 (where λ is the wavelength) provides a performance level that is essentially limited by diffraction.

This results in an angular resolution capability that is a function of the telescope aperture and
incoming wavelength. The telescope aperture was essentially fixed by considerations of
optical performance requirements and STS payload bay diameter constraints. The payload
bay could accommodate a telescope with an aperture as large as 2.4 m. In turn, this size
aperture yields a diffraction-limited image of 0.06 arc-seconds in size without considering
any distortion from pointing instability. Judgment indicated that the 0.06 arc-seconds of
angular resolution would allow a
reasonable margin of safety for the primary optical system to offset some pointing instability and manufacturing deficiencies. (Arcseconds: after chopping a circle into 360 equal slices (degrees), we took one of those slices and chopped it into 60 tiny slices (arcminutes); taking one of those slices and chopping it into 60 even smaller slices gives the arcsecond.) Specifically, this amount of margin would support the 0.1 arc-second resolution requirement over a mirror quality range of λ/20 down to λ/13.5. However, if λ/20 or better was achieved, then this margin could be applied as a relaxation of the pointing stability requirement. Since a final figure quality of λ/19.2 was realized, it was decided to retain the pointing stability at the initial requirement of 0.007 arc-seconds (RMS). Another
fundamental reason for selecting an overall system with a focal ratio of F/24 was the physical
impracticality of attempting to accommodate five scientific instrument apertures within a
focal plane of a smaller area and at the same time achieve sufficient clearances between
instruments to support in-orbit replacements. The broad spectral coverage requirement (1216 angstroms - 1000 microns) influenced the primary optical system in two important ways. It prescribed that an all-reflective system be used and that magnesium fluoride be applied as an overcoating material for the reflective aluminium mirror coatings. A refractive optical system could not be designed to accommodate this required spectral range. The overcoating material selected was capable of passing the shorter wavelengths while also being one of the more moisture-resistant materials available. The HST is a Ritchey-Chrétien design. To achieve the λ/20 wavefront quality for the total optical system, the primary mirror was allocated λ/60, the secondary mirror was allocated a figure quality of λ/100, and the relative alignment integrity between them had to be maintained within a ±2 micron envelope. The figure
qualities of the mirrors were manufacturable but challenging with available current
technology; however, the severe constraints on total system alignment indicated that the
sophisticated structural support systems required a dynamic real-time misalignment
correction mechanism. To fill this need an active optical interferometer system was
developed. The Optical Control System (OCS), a part of the Fine Guidance System (FGS) is
located at the telescope focal plane, as are the scientific instruments; thereby, it is capable of
providing total optical system alignment knowledge at the precise location where the
incoming light wavefront is transferred to the scientific payload. This device views a star
through the telescope optics and senses wavefront aberrations resulting from primary optical
system distortions and total telescope assembly misalignments. This knowledge is transmitted
to the ground where it is analysed to determine what commands need to be generated to
correct any undesirable aberrations. Correction commands received by the HST can cause the
secondary mirror position to be adjusted for tip, tilt, decentre, and focus. The primary mirror
figure can also be corrected by properly commanding numerous push-pull actuators. The
structural design of the OTA was driven by the requirements for stiffness, high dimensional
stability, strength and lightweight. Metal is used for the main ring of the primary mirror since
this is also the major load-carrying member between the scientific instruments, the OTA, and
the SSM. However, the metering structure, which supports the secondary mirror at a precise
distance from the primary mirror and the remaining truss assembly is fabricated from
graphite epoxy. Using this material arrangement, with active and passive thermal control
methods, yields a structural system that satisfies loads and optical requirements across the
range of environmental conditions expected. To satisfy the 0.1 arc-second image resolution
requirement, the telescope line-of-sight must be held extremely stable relative to the object
under observation. In this telescope design, the larger portions of image quality limitations
were allocated, for practical reasons, to the optical systems. The pointing control system was
tasked to meet its requirements with only 10% of the total error budget available. This
established a 0.007 arc-second (RMS) line-of-sight stability requirement. The 0.01 arc-second pointing requirement was derived from the 0.1 arc-second minimum SI aperture requirement. Since a star image of 0.1 arc-seconds (the telescope resolution goal) will fill a 0.1 arc-second SI aperture, the PCS was designed to be capable of pointing accurately to 0.01 arc-seconds. Thus, the PCS can locate the smallest resolved image in the centre of the smallest aperture and avoid significant signal losses. The PCS depends on numerous state-of-the-art attitude sensors which, because of individual limitations in field-of-view (FOV), sensitivity, or accuracy, forced an overall system design that uses an architecture of individual acquisition modes wherein one sensor or mode dominates until the pointing accuracy attained is within the acquisition range of the next higher-order sensor or mode. The three sequential modes of operation, (1) manoeuvring to the target area, (2) FGS guide star acquisition, and (3) acquisition of the target star by the SI, are described below to explain how this process achieves the required accuracies.
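As a rough check on the angular-resolution figures quoted above, the short sketch below evaluates the standard Rayleigh criterion, θ ≈ 1.22 λ/D, for a 2.4 m aperture. The 550 nm wavelength is an assumed representative visible wavelength, not a value taken from the HST design documents, so this is only an illustrative calculation.

import math

ARCSEC_PER_RAD = 180.0 / math.pi * 3600.0   # about 206,265 arc-seconds per radian

def diffraction_limit_arcsec(wavelength_m, aperture_m):
    # Rayleigh-criterion angular resolution, converted to arc-seconds.
    return 1.22 * wavelength_m / aperture_m * ARCSEC_PER_RAD

# Assumed 550 nm visible wavelength and the 2.4 m HST aperture quoted above.
theta = diffraction_limit_arcsec(550e-9, 2.4)
print(f"Diffraction-limited image size: {theta:.3f} arc-seconds")   # roughly 0.06 arc-seconds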

1. Manoeuvring: Attitude reference during manoeuvring is maintained by using at least


three of the six available Rate Gyro Assemblies (RGA's). The final accuracy expected at the termination of the manoeuvre is 45 arc-seconds. This accuracy is necessary to obviate the need for an attitude update with two of the three Fixed Head Star Trackers (FHST's) available. Principal error sources during the manoeuvre are initial uncertainties in roll attitude, which will be translated as pitch/yaw errors, and gyro drift errors. Of course, the larger the manoeuvre angle, the greater will be the accumulation of errors. If a manoeuvre larger than 90 degrees is attempted, it is almost a certainty that an FHST attitude update will be necessary. Pre-selected guide stars of at least Mv are initially acquired by the star trackers with their 8 degree FOV to establish a baseline for the pointing process. When the HST manoeuvre is
completed, the line-of-sight pointing accuracy is expected to be within 30 arc seconds. This is
well within the 45 arc-second maximum search radius of the next sensor - the FGS.
2. Fine Guidance Acquisition: Located in the OTA focal plane, this sensor identifies pre-selected
guide stars imaged at the periphery of the OTA FOV. Two guide stars are acquired to control
pitch, yaw, and roll. Two FGSs are used for this acquisition mode. First, one FGS locks onto
its designated guide star, and then the second completes the same process. Figure 2 depicts the
location of three available FGSs and their assigned FOVs in the OTA focal plane.
Acquisition starts with a search process while the HST is being held to a position by the
RGA's. In this search process, a 3 arc-second square aperture is controlled to trace a spiral
pattern in the vicinity of the pre-selected guide star. It continues this process until a star that
fits the photometric intensity is found. The maximum search radius of this process is 45 arc
seconds. At this point, the FGS switches into a coarse tracking mode. The 3 arc-second aperture now traces a circular movement around the guide star. Using internal logic, this motion finally balances the error signal when the star is equally intercepted by the four corners of the aperture. Star position is now known within 0.01 arc-seconds. The FGS then transitions
into fine track mode where the capture range of the sensor is 0.02 arcseconds. This process
locates the centroid of the star and positions the telescope line-of-sight to be perpendicular
and maintained to the starlight wavefront. HST attitude stabilization is now transferred from
the RGA's to the FGS's.
Figure 2: OTA focal plane layout, showing the 3 arc-sec square aperture, the Fine Guidance Subsystem (FGS) total field (3 places), and the axial-bay Scientific Instruments (SI) data field (4 places).

3. Target Star Acquisition: The guide stars used by the FGS were pre-selected to enable the capture of a target star (observation star) within the telescope's FOV. The scientific instrument can now observe the target star at its aperture with its internal scientific detectors. In some instances, when it is desirable to position the target star at a precise location within the SI apertures, it becomes necessary to take an image, analyse it either on board or on the ground, and command the HST to perform a small correcting manoeuvre.
The HST pointing control system uses four reaction wheels for attitude control and electromagnetic torquers to neutralize the effect of gravity gradient torques on reaction wheel momentum build-up.

SYSTEMS ENGINEERING CONTRIBUTIONS
The HST program has benefited
from a well-organized systems engineering effort. This emphasis, intense in the past several
years, has been rewarded with many important issues being resolved in an optimum manner.
Some of the designs, methods and other contributions are considered to be technologically
significant. Several of the contributions are described in this section.
Rate Gyro Assembly (RGA): The RGA's are key to the successful operation of the HST pointing and control system. Except for the performance improvement changes that will be described, the rate sensor design is the same one used by NASA for the IUE and HEAO programs. The fine pointing stability required for the HST has been the principal force for improved gyro noise
performance. As the program evolved and data became more available on hardware and
vehicle structural characteristics, it became clear that a definite need existed to provide some
margin for fine pointing stability. Because of the increased importance placed on RGA noise
performance, the mechanical and electrical error sources within the assembly were
characterized by tests and evaluations. This action led to a thorough understanding of the
contributors to RGA noise, and a program of specific design modifications that improved
noise performance by at least an order of magnitude was accomplished. The predominant
noise source within the RGA was mechanical noise associated with the gyro wheel. This
mechanical source of error includes gas flow turbulence around the wheel and wheel hunting
about the synchronous motor speed. Each of these sources produced torques on the gyro float
mechanism that are interpreted by the electronics as rates sensed by the unit about its
sensitive input axis. The noise due to motor hunting was minimized by an active high gain
servo loop that sensed hunting and properly controlled electrical motor operation. Turbulence
with the gyro float mechanism was determined, through an elaborate test program where
special and unique test equipment was developed, to be the dominant noise-producing source.
Seismic noise, though not a contributor to the real RGA inherent noise-producing sources,
must be considered in characterizing the performance of this precision and extremely
sensitive device. To characterize the noise produced by the HST rate sensors and to test
design improvements that resulted, a singular and unique seismically quiet test site was used.
It was determined that a shroud, if included in the wheel design, would change the very
turbulent gas flow pattern within the float into a laminar flow, thus minimizing the noise
associated with this phenomenon. The wheel speed for the HST gyro is 19,200 RPM. Several
shroud designs were conceptualized, but through an analytical process, one was selected for
implementation (Fig. 3). With this shroud design, the measured noise level of the RGA has
been reduced by factors of 3.6 to 10.0 across the usable frequency range.
Reaction Wheel Isolation: The four reaction wheel assemblies (RWA's) used in the HST are installed in pairs within two separate compartments of the SSM. The spin axis of the paired assemblies sharing a compartment is elevated 20 degrees from a plane that is normal to the telescope line-of-sight. The planes formed by the spin axes of paired assemblies are separated by 90 degrees. Each RWA is 0.64 meters in diameter, 0.50 meters high and weighs 48 kilograms. At a wheel speed of 3000 rpm, a torque of 0.82 Nm is produced. The demanding pointing and stability requirements of the HST led to a program of quantification wherein noise
contributing sources were identified, isolated, and characterized. The most significant
disturbances in the RWA's were found to be the axial forces generated by interactions of very
small imperfections in bearing inner and outer raceway and balls. Initial attempts at
eliminating these disturbances were concentrated on a bearing selection program. This effort
succeeded in reducing the forces present at beginning of life but did not eliminate the
concerns for unacceptable high forces that could be experienced as the bearings wear during
the mission. Additionally, as the HST orbit decays and aerodynamic drag necessitates higher
wheel speeds for pointing control, the real potential existed for higher disturbance forces
early in mission life. Faced with these data, designers strongly recommended the retention of
the precision bearing selection program but also advanced the design of a device that would
physically isolate the induced vibration of the RWA's from the vehicle structural dynamics.
Several isolation concepts were fabricated and tested before a final design was selected.
Included in the evaluation program were a viscoelastic polymer, a stranded and coiled steel
cable, and a purely viscous damping device. The isolator system that exhibited the best
repeatable performance characteristics was the viscous damping device. Some of the unique
design features that this concept has over the other options are:
(1) extremely small stiction and deadband,
(2) linear operation over a very large dynamic range,
(3) effective damping for vibration input amplitudes as small as 5 nanometres, and
(4) stiffness and damping that remain constant over a dynamic amplitude range ratio of 100,000 to 1.

Figure 4 shows a cross-section view of the isolator. Figures 5 and 6 provide a comparison of
RWA induced instability without and with the isolator modification.

Spectrometers

Miniature Energetic Ion Composition Instrument


Plasma (ionized gas) is the most common form of matter in the universe. The interactions of
ions and electrons in space plasmas with magnetic and electric fields in the atmospheres of
stars or the magnetospheres of planets can accelerate the ions to very high energies. Space
scientists have been trying to understand the origins of planetary magnetospheres and solar
wind for more than 40 years. They examine the spatial and temporal distribution of energetic
ions as well as their atomic and isotopic composition to deduce their origin and details of
their Acceleration and propagation. These studies have been conducted with a variety of
instrument types over the years. During the past 15 years, instruments that measure both
particle energy and speed have shown that they can determine the particle mass with atomic
species or sometimes isotopic resolution over a wide range of energies. These instruments,
known as time-of-flight (TOF) spectrometers, measure particle speed by the time it takes the
particle to fly across the innards of the instrument. A very high-resolution TOF spectrometer
onboard the Advanced Composition Explorer (ACE) mission is studying the isotopic
composition of solar energetic ions. Another TOF instrument on the Cassini mission will
examine the atomic composition of energetic particles in Saturn’s magnetosphere. These are
excellent tools, but TOF instruments have been power-hungry (15 to 25 W) and massive (10
to 20 kg). APL scientists and engineers have designed a miniature TOF spectrometer that is about
the size of a hockey puck, yet it has all of the sensitivity of its massive predecessors. The particle
flight path in the instrument head is only 5 cm, compared with the 50-cm flight path in the ACE high-
resolution spectrometer. To measure 10-keV protons in this instrument, we must accurately measure
time intervals of 36 ns, and to measure 1-MeV protons, we must measure a flight time of only 3.6 ns
(about the time that light travels 1 m). Making these very short time interval measurements used to
require high-powered electronics. In the TOF instrument, a new type of high-speed integrated circuit
is replacing boards of electronics. This chip, developed at APL, measures time intervals with an
accuracy of 50 ps for a few milliwatts of power. The TOF chip, although developed for energetic
particle instrumentation, may open up entirely new ways of designing instruments for many different
types of measurements. A comparison of the sizes of the Cassini and the miniature TOF instruments
is shown in Fig. 3.
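To make the quoted time scales concrete, the sketch below estimates non-relativistic proton flight times over the 5 cm path from the kinetic energy, using t = L / sqrt(2E/m). It is an illustrative calculation only, not code from the APL instrument.

import math

PROTON_MASS_KG = 1.6726e-27
EV_TO_JOULE = 1.6022e-19

def flight_time_ns(energy_ev, path_m, mass_kg=PROTON_MASS_KG):
    # Non-relativistic time of flight for an ion with the given kinetic energy.
    speed = math.sqrt(2.0 * energy_ev * EV_TO_JOULE / mass_kg)   # m/s
    return path_m / speed * 1e9                                  # ns

for energy_ev in (10e3, 1e6):   # 10-keV and 1-MeV protons
    print(f"{energy_ev / 1e3:7.0f} keV proton over 5 cm: {flight_time_ns(energy_ev, 0.05):5.1f} ns")
# Prints roughly 36 ns and 3.6 ns, matching the intervals quoted above.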

Figure.: A comparison of the time-of-flight (TOF) sensor for the


Magnetospheric Imaging Instrument (MIMI) on the Cassini mission
and the sensor for the Miniature Ion Composition
Instrument (inset). This comparison overstates the size difference
between the two because the TOF portion of MIMI is only the large
rectangular box that forms the lower half
of the instrument. The large deflection plates at the top exclude
charged particles and only
admit energetic neutral atoms.

Figure: The Laser Ablation Mass Spectrometer as it may operate on the surface of an asteroid. The laser focuses a pulse of high-intensity light, shown in red, at a small spot on the rock surface. Some of the ions produced in the fireball enter the instrument and are deflected by the electric field down to the microchannel plate detector (the gray annular disk at the front of the instrument). The camera at the top lets the scientist see which rock grain has been targeted.

Laser Ablation Mass Spectrometer

Another example of space instrumentation is of an entirely new form—a miniature mass spectrometer for use on the surface of comets, asteroids, and planets. Planetary science from space is about to enter the third stage of exploration. The first stage is gross reconnaissance. All of the planets except Pluto have been visited by at least one spacecraft flyby, and spacecraft have flown past four asteroids and one comet. We have seen the gross structure of these bodies.

The second stage is orbital missions to examine the structure of planetary bodies and begin the
process of understanding their origin and evolution. Venus, Earth, Mars, and Jupiter have already had
orbital missions to examine the shape of the surface, the structure, and the coarse composition of these
bodies.
The laser-ablated ions travel from the sample surface into the mass analyzer and are redirected in a two-stage reflectron onto a dual microchannel plate (MCP) detector, arriving at a sequence of times proportional to the square root of their mass-to-charge ratios, i.e., (m/z)^1/2.
The APL-built NEAR spacecraft is about to orbit the asteroid 433 Eros for 1 year to examine it
closely. The third stage of exploration requires landed packages to examine portions of the surface in
great detail, e.g., the atomic and isotopic composition of the rocks, soils, and regolith. Atomic analysis
can reveal the mineral composition of the rocks, while isotopic analysis contains information about
the origin of those materials. For example, the ratio of neon isotopes 20Ne/22Ne in terrestrial rock
samples is very different from protosolar grains. These grains are believed to be pristine samples of
the original solar nebula and have been unchanged for billions of years. One example of a third-stage
exploration mission was the Mars Pathfinder. 6 The Sojourner rover carried an alpha, proton, X-ray
(APX) instrument to examine the rock composition. It was able to determine the rough atomic
composition of a few rocks on the Martian surface. APX instruments are limited, however, because
they only measure elements in the atomic number range from magnesium to iron, and they also have
atomic, not isotopic, resolution. For the last 3 three years, APL has been developing the Laser
Ablation Mass Spectrometer (Fig. above).
This instrument fires a very short laser pulse at the surface of a rock to vaporize and ionize a tiny
segment. Some of the ions enter a reflectron analyser where an electric field turns them around and
directs them into a detector. The properties of the electric field in the reflectron are designed so that
the time from the laser pulse until the ions reach the detector is independent of the energy of the ion
and depends only on its mass. The lightest ions (hydrogen) reach the detector first, followed by the
heavier ions, all the way to the highest masses for which the instrument is designed (~1000 amu).
Thus, just by watching the time history of ions hitting the detector, the full isotopic composition of the
sample is determined. The whole measurement takes only a few tens of microseconds. Laser ablation
mass spectrometry has been used in laboratories for a few years and is regarded as one of the most
sensitive tools for microscopic composition analysis. Laser ablation spectrometers are a natural
candidate for space instrumentation except that most are very large, not something that could be put
onto a Mars Pathfinder–sized lander. The instrument being developed at APL will be about the size of
a 1-L bottle and weigh less than 5 kg. Figure shows the isotopic composition of a meteoritic sample
measured by the APL mass spectrometer. This instrument will have another important feature, i.e., a
microscopic camera in the optical train to show the exact part of the sample that is being measured.
Since the spot being vaporized is less than 0.005 cm in diameter, it can be focused on individual
grains in the rock. With this capability, scientists need not work with just the average bulk
composition. They can examine the minerals in the rock individually. This is especially important for
breccias, rocks assembled from angular fragments broken off parent rocks that have been cemented
together into a composite rock. If the laser power is lowered somewhat, the molecules in the sample
can be ionized without breaking them apart. This can help understand the exact chemical form of the
minerals in rock or soil. It may also allow the instrument to search for organic compounds on the
surface of other planets.
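The arrival-time ordering produced by such a reflectron can be illustrated with the relation t ∝ sqrt(m/z). In the sketch below the instrument constant and the example species are arbitrary assumptions chosen only to show the ordering, not parameters of the APL spectrometer.

import math

K_RELATIVE = 1.0   # arbitrary instrument constant, in relative time units

def arrival_time(mass_amu, charge=1, k=K_RELATIVE):
    # Relative arrival time for an ion: t = k * sqrt(m/z).
    return k * math.sqrt(mass_amu / charge)

species = {"H+": 1, "C+": 12, "Fe+": 56, "heavy organic fragment (~200 amu)": 200}
for name, mass in sorted(species.items(), key=lambda item: item[1]):
    print(f"{name:35s} arrives at ~{arrival_time(mass):5.2f} (relative units)")
# Hydrogen arrives first and the heaviest ions last, as described in the text.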

Near Infrared Camera and Multi-Object Spectrometer,


NICMOS is a second-generation instrument installed on the HST during SM2 in 1997. Its
cryogen was depleted in 1998. During SM3B in 2002, astronauts installed the NICMOS
Cooling System (NCS), which utilized a new technology called a Reverse Brayton-Cycle
Cryocooler (see Fig. 4-9), and NICMOS was returned to full, normal science operation. The
mechanical cooler allows longer operational lifetimes than expendable cryogenic systems.
Instrument Description: NICMOS is an all-reflective imaging system: near-room-temperature fore-optics relay images to three focal plane cameras contained in a cryogenic dewar system. Each camera covers the same spectral band of 0.8 to 2.5 microns with different magnifications. Next, the corrected image is relayed to a three-mirror field-dividing assembly,
which splits the light into three separate, second-stage optical paths. In addition to the field-
dividing mirror, each second stage optic uses a two-mirror relay set and a folding flat mirror.
The field-dividing mirrors are tipped to divide the light rays by almost 4.5 degrees. The tip
allows physical separation for the two-mirror relay sets for each camera and its FOV. The
curvature of each mirror allows the required degree of freedom to set the exit pupil at the cold
mask placed in front of the filter wheel of each camera. A corrected image is produced in the
centre of the Camera 1 field mirror. Its remaining mirrors are confocal parabolas with offset
axes to relay the image into the dewar with the correct magnification and minimal aberration.
Cameras 2 and 3 have different amounts of astigmatism because their fields are at different
off-axis points from Camera 1. To correct residual astigmatism, one of the off-axis relay
mirrors in Camera 3 is a hyperbola and one of the relay mirrors in Camera 2 is an oblate
ellipsoid. Camera 2 also allows a coronagraphic mode by placing a dark spot on its field-
dividing mirror. During this mode, the HST is maneuvered so that the star of observation falls
within the Camera 2 field dividing mirror and becomes occulted for coronagraphic
measurements. All the detectors are 256 x 256-pixel arrays of mercury, cadmium and
tellurium (HgCdTe) with 40-micron pixel-to-pixel spacing. An independent, cold filter wheel
is placed in front of each camera and is rotated by room-temperature motors placed on the
external access port of the dewar. A multilevel, flat-field illumination system corrects detector nonuniformities. The light source and associated electronics are located in the
electronics section at the rear of the instrument. IR energy is routed to the optical system
using a fiber bundle. The fiber bundle illuminates the rear of the corrector mirror, which is
partially transparent and fits the aperture from the fiber bundle. The backside of the element
is coarsely ground to produce a diffuse source. Three detector cables and three detector clock
cables route electrical signals from the cryogen tank to the hermetic connector at the vacuum
shell. The cables consist of small-diameter, stainless-steel wire mounted to a polymeric carrier
film. They are shielded to minimize noise and crosstalk between channels. (Shielding is an
aluminized polyester film incorporated into drain wires.) The cables also have low thermal
conductivity to minimize parasitic heat loads. In addition, two unshielded cables connect to
thermal sensors used during fill and for on-orbit monitoring. Besides processing signals from
and controlling the detectors, the electronics prepare data for transmission to the HST
computer, respond to ground commands through the HST and control operation of the
instrument. NICMOS uses an on-board 80386 microprocessor with 16 megabytes of memory
for instrument operation and data handling. Two systems are provided for redundancy. The
detector control electronics subsystem includes a microprocessor dedicated to operation of
the focal plane array assemblies. Two microprocessors are provided for redundancy.

7. SOLAR IRRADIANCE MEASUREMENT SYSTEM

The radiation energy emitted by the Sun in the form of solar radiation is one of the main
driving forces for Earth's atmosphere. Almost every cycle on the Earth, biological as well as physical, exists because of the solar radiation spectrum, which ranges from the ultraviolet (UV) to the infra-red (IR) region. The lower atmosphere near the Earth's surface is largely governed by the visible and IR portions of the solar spectrum, whereas the UV portion is equally important in controlling the middle and upper atmosphere. The visible spectrum, in turn, is the most essential portion for supporting living processes and photosynthesis. Eddy demonstrated that a small
variation in solar irradiance has a great effect on Earth's atmosphere (Eddy, 1976). Later on, this statement was supplemented by more recent research. As evidence, Lean and Rind showed that the global temperature at the Earth's surface increased by around 0.1°C with the variation of solar irradiance during solar cycle 23. So, solar radiation
has a very significant role in the life cycles of the Earth. The complete investigation of the
solar radiation signal also provides useful information regarding different astrophysical
phenomenology. In this context, Kopp demonstrated that the variation in solar irradiance is well correlated with solar activity such as faculae and sunspots, so the measurement of the solar irradiance signal is a very significant prospect for researchers. The solar radiation signal is measured in the form of either Total Solar Irradiance outside the Earth's atmosphere [> Earth radius (~6370 km) + thickness of 99% of the air mass (~30 km)] or Global Solar Irradiance at the Earth's surface. To collect Total Solar Irradiance data, NASA placed satellites such as UARS-SOLSTICE (1991 - 1999), SUSIM UARS (1991 - 2002) and SORCE (2003 - Feb. 2020) in orbit. But to understand the effect of solar irradiance on Earth's atmosphere, Global
Solar Irradiance is of equal importance to Total Solar Irradiance. Modelling Global Solar Irradiance from measured Total Solar Irradiance confronts many problems because up to 70% of the irradiance can be blocked by clouds and the Earth's atmosphere. So measurement of Global Solar Irradiance data has become an important research area.

A conventional pyranometer mostly collects the long-wave global solar irradiance signal by utilizing a thermometric type sensor, whereas a radiometric type sensor measures the global solar irradiance by collecting the short-wave radiation of the Sun. Primarily, silicon photovoltaic cells have been used as radiometric sensors to measure solar irradiance, attaining an accuracy of around ±13 W/m2, but such systems are expensive and temperature-sensitive. Some researchers have attempted to compensate for this temperature effect using a first-class thermopile type thermal sensor, but the compensation was only partial. Photoresistor type radiometric sensors are also used because of their lower cost, but they have a slower response than the others. Junction semiconductors such as photodiodes and phototransistors are mostly used in radiometric applications because they are sensitive to a broader range of wavelengths than other radiometric sensors. Radiometric type sensors have some traditional shortcomings compared to conventional thermometric type sensors, such as their directional response and temperature dependence. A radiometric type pyranometer differs by < ±3% from a thermoelectric type pyranometer in terms of cosine or directional response, which has a significant role in solar radiation measurement. The influence of ambient temperature on a radiometric type sensor is also higher than on a thermoelectric type sensor.

However, radiometric sensors have some advantages over thermoelectric type sensors. The response time of a radiometric type sensor is very short, around 10 µs, compared with 1 - 10 s for a thermoelectric type sensor (Coulson, 1975). The average measurement uncertainty for thermoelectric and radiometric type sensors is around ±5% and ±2.4% respectively. So radiometric type sensors are mostly used in solar irradiance measurement systems to make them lower in cost compared to conventional pyranometers.

The current setup also tries to overcome the above-mentioned shortcomings associated with radiometric type sensors. The current system has the following features:
a. It minimizes the cosine or directional response error by using multiple sensors.
b. It minimizes the influence of temperature by means of a temperature control circuit.
c. It protects the sensors from outside environmental hazards with a flexible electro-mechanical window.
d. It enhances the degree of portability by using a reliable wireless communication channel between the sensors and the monitoring station.
e. It is lower in cost compared to other commercially available pyranometers.
The complete work is organized as follows: the methodology used to develop this system is described in section 7.2. Section 7.3 covers the calibration procedure and performance analysis of the developed system with respect to other systems. Finally, the conclusion regarding this work is noted in section 7.4.

Methodology:

The compatible and low-cost system is designed to collect the short wave ground solar
irradiance signal. The entire module is subdivided into two components named sensor station
and monitoring station. The sensor station collects the global solar irradiance signal at a
programmable sampling interval and represents it in 10 bit compressed digital format. The
block schematic diagram of the whole arrangement is shown in Fig. 7.1.

Fig. 1. Block schematic diagram of the whole system

7.2.1. Sensor Station:

The sensor station is portable (12.5 cm × 11.5 cm × 16 cm) and automated in nature. This section can be mounted on the rooftop of any building provided that its field of view has no obstruction from neighbouring structures. This section comprises the sensor module and the data acquisition and transmission unit.

● Sensor Module:

The sensor module of the designed arrangement uses a radiometric type sensor to focus on the short-wave radiation of the Sun. The sensor is the fundamental component of any solar irradiance measurement system, so selection of the sensor element required rigorous examination of commercially available radiometric sensing modules. The prime characteristic of any solar irradiance measuring sensor is sensitivity. Regarding sensitivity, p-n junction semiconductor diodes and transistors are more sensitive to input irradiance than other radiometric sensors. To select an efficient sensor element in terms of sensitivity (expressed in equation 7.1), characteristics like Responsivity (A/W) and irradiance-sensible area (mm2) were examined for a wide variety of photodiodes as well as phototransistors.
Sensitivity (S) = Output current deflection (A) / Input insolation producing the deflection (W/m2)    (7.1)

Datasheets of various photodiodes and phototransistors, such as the OSD5-5T, S9219-01, OSD15-5T, BPW21 and L14G2, were analysed to formulate Table 7.1, comprising Sensor Responsivity (A/W), Sensible Area (mm2) and Cost per Unit (₹).
To formulate the Sensitivity of each sensor element, the output current deflection (Amp) is
calculated for a particular input irradiance (500 Watt/m2) using the following equation and
represented in Table 7.2.
Output Current Deflection (I) = Input Insolation × Responsivity × Sensible Area   (A)   (7.2)

I_OSD5-5T = 500 W/m2 × 0.15 A/W × 5×10^-6 m2 = 3.75×10^-4 A = 0.375 mA   (7.3)

I_S9219-01 = 500 W/m2 × 0.22 A/W × 12.96×10^-6 m2 = 1.42×10^-3 A = 1.42 mA   (7.4)

I_OSD15-5T = 500 W/m2 × 0.21 A/W × 15×10^-6 m2 = 1.57×10^-3 A = 1.57 mA   (7.5)

I_BPW21 = 500 W/m2 × 0.34 A/W × 7.34×10^-6 m2 = 1.25×10^-3 A = 1.25 mA   (7.6)

I_L14G2 = 500 W/m2 × 6.25 A/W × 16.04×10^-6 m2 = 50.12×10^-3 A = 50.12 mA   (7.7)

From Tables 7.1 and 7.2, the phototransistor L14G2 offers the lowest cost per unit as well as the best sensitivity, so L14G2 is utilized as the sensor element in this arrangement. This phototransistor can be used in either common-collector or common-emitter mode depending on the response required for a particular solar irradiance measuring system.

Table 7.1: Features of different photosensors.

Sensor Element              Responsivity (A/W)   Sensible Area (mm2)   Cost per Unit (₹)
OSD5-5T (Photodiode)        0.15                 5                     1250
S9219-01 (Photodiode)       0.22                 12.96                 -
OSD15-5T (Photodiode)       0.21                 15                    1330
BPW21 (Photodiode)          0.34                 7.34                  553
L14G2 (Phototransistor)     6.25                 16.04                 70
Table 7.2: Comparison of various photosensors in terms of sensitivity.

Sensor Element              Input Irradiance (W/m2)   Output Current Deflection (mA)   Sensitivity (µA per W/m2)
OSD5-5T (Photodiode)        500                       0.375                            0.75
S9219-01 (Photodiode)       500                       1.42                             2.84
OSD15-5T (Photodiode)       500                       1.57                             3.14
BPW21 (Photodiode)          500                       1.25                             2.5
L14G2 (Phototransistor)     500                       50.12                            100.24
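The deflection and sensitivity values in Table 7.2 follow directly from equations (7.1) and (7.2); the short sketch below reproduces them for the five photosensors of Table 7.1 (values match the tables up to rounding).

# Responsivity (A/W) and sensible area (mm^2) taken from Table 7.1.
sensors = {
    "OSD5-5T":  (0.15, 5.0),
    "S9219-01": (0.22, 12.96),
    "OSD15-5T": (0.21, 15.0),
    "BPW21":    (0.34, 7.34),
    "L14G2":    (6.25, 16.04),
}

IRRADIANCE = 500.0   # W/m^2, the test insolation used in equations (7.3) - (7.7)

for name, (responsivity, area_mm2) in sensors.items():
    area_m2 = area_mm2 * 1e-6
    deflection_a = IRRADIANCE * responsivity * area_m2     # equation (7.2), in amperes
    sensitivity = deflection_a / IRRADIANCE * 1e6          # equation (7.1), in uA per W/m^2
    print(f"{name:9s}  I = {deflection_a * 1e3:6.2f} mA   S = {sensitivity:7.2f} uA per W/m^2")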

Signal Conditioning Unit:

The solar irradiance signal collected by the sensor module is represented in digital format using a 10-bit A/D converter, so the linearity of the sensor element is a very significant characteristic for capturing the solar irradiance signal. The output current of the phototransistor is directly proportional to the solar irradiance, but the output voltage is affected by the load. Phototransistors with different load values were exposed on a fully sunny day and their normalized output response versus input irradiance is shown in Fig. 7.2. It is observed that the output voltage flattens with increasing load value. The current design uses a 0.1 Ω load resistance because of its linear response.

The phototransistor output, which varies in the microvolt range, is fed to an instrumentation amplifier for amplification before being represented in digital format. The current design uses the AD620 as the instrumentation amplifier because of its high accuracy and low cost. This amplifier also has low power consumption, which makes it useful in a low-powered system. The circuit diagram of the developed system is shown in Fig. 7.3. The gain resistance RG in the signal conditioning section defines the gain of the amplifier as:

Gain = (49.4 kΩ / RG) + 1

Fig. 7.2. Normalized response of the phototransistor with different load values.

In this current design, RG has been chosen as 650Ω to provide a gain factor of around 77.
After amplification, a low pass filter circuit is used to minimize the possible interference at
the ADC input. The current design sets the cut-off frequency of that low-pass filter at around 10 Hz with R = 6.8 kΩ and C = 2.2 µF.
Cut-off frequency = 1 / (2πRC) = 1 / (2 × 3.14 × 6.8×10^3 × 2.2×10^-6) = 10.64 Hz
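The amplifier gain and filter cut-off chosen above can be checked with the small sketch below, which evaluates the AD620 gain equation and the first-order RC low-pass formula for the component values quoted in the text.

import math

def ad620_gain(rg_ohm):
    # AD620 gain for an external gain resistor RG: G = 49.4 kOhm / RG + 1.
    return 49.4e3 / rg_ohm + 1.0

def rc_cutoff_hz(r_ohm, c_farad):
    # First-order low-pass cut-off frequency: f = 1 / (2 * pi * R * C).
    return 1.0 / (2.0 * math.pi * r_ohm * c_farad)

print(f"Gain with RG = 650 Ohm: {ad620_gain(650):.1f}")                                 # about 77
print(f"Cut-off with R = 6.8 kOhm, C = 2.2 uF: {rc_cutoff_hz(6.8e3, 2.2e-6):.2f} Hz")   # about 10.6 Hz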

The amplified and filtered signal is fed to the microcontroller for further processing. An ATmega32L is chosen in this design because of its low power consumption. The microcontroller converts the input signal into its digital equivalent using the built-in A/D converter. The current design uses the internal reference voltage for A/D conversion, which gives a system resolution of around 1.40 W/m2. The digitized data is compressed using the DMBRLE technique (discussed in the next section) to enhance the efficiency of the memory as well as the transmission channel (Roy et al., 2015). This compression method is lossless in nature and provides a compression ratio of up to 74.56%. The compressed data is temporarily stored in the internal memory until a command is received from the monitoring station. The compressed characters from the sensor station are communicated to the monitoring station in packet format using a ZigBee Pro module at a 9600 baud rate. Upon completion of one data transmission session, the internal memory is refreshed for the next session. Compressed data are transmitted in the form of a data packet consisting of a header byte and an error checksum byte for error detection at the receiver end. The data packet format is shown in Fig. 7.4.

Fig. 3. The complete circuit diagram of the developed system


Fig. 7.4: Data packet format for transmission (data bytes for the header, a data byte for the packet serial number, the compressed data bytes, and a data byte for error checking).

● DMBRLE Compression Technique:

It can be observed that the daily solar irradiance data comprises several successive identical values from late afternoon to early morning, as shown in Fig. 7.5. Also, the variation has very low-frequency changes during the early morning, mid-noon and late afternoon and high-frequency changes during the rest of the day. These factors play the main role in developing the compression algorithm, whose main objective is to compress those successive identical samples as well as the low-frequency-change regions. The developed algorithm is based jointly on First Sample Difference (FSD) and S-Run Length Encoding (S-RLE) techniques: FSD is modified and employed to concentrate on the low-frequency regions and S-RLE on regions having successive identical sample values S. The logic flow diagram of the developed compression algorithm is shown in Fig. 7.6.
Fig. 5: Typical daily global solar irradiance data.

Fig. 6: Logic Flow diagram of the developed algorithm

To achieve minimum latency between the two transceiver devices, one frame is constructed comprising one original sample along with the mean value [K], followed by 256 FSD elements. The First Sample Difference (FSD) elements of the original 10-bit quantized samples are computed as ∆i = X(i+1) − X(i).

Most of the elements of the FSD array lie within the ±10 region and the rest within the ±100 region; these are passed to magnitude and sign byte encoding (Gupta and Mitra, 2011). In the sign encoding technique (Gupta and Mitra, 2011), one sign byte (which stores the sign information of the next eight FSD elements) is followed by eight magnitude bytes: the D7 bit of the sign byte represents the sign of the 1st of the eight magnitude elements, and similarly D0 represents the 8th element.

In the magnitude encoding technique, two small FSD values (within ±9) are represented by a single packed byte (a nibble combination), and the others by a byte combination (with an offset of 100), where x(i) represents a first-difference element and y(j) the magnitude-encoded byte.

To enhance the compression performance further, a bias value is introduced to eliminate negative data and the need for the sign-byte encoding technique. The bias value [K] is obtained as the mean value of one data frame. The magnitude encoding technique is also modified to achieve the best performance: in the modified magnitude encoding technique, only FSD values from [K−5] to [K+5] are eligible for the nibble combination, and the byte combination is used for the rest.

Lastly, S-RLE is applied to further compress runs of the same sample difference [S], representing each run in the data stream by only two bytes: the first byte is fixed at 00 (indicating the start of a same-value sequence) and the second byte equals the number of consecutive S elements, where y(j) is the array after RLE and x(i) the array before RLE.
Table 7.3: Summarized Compression Algorithm

Original Solar Irradiance Data       X1, X2, X3, ..., Xn-1, Xn
First Sample Difference elements     ∆1, ∆2, ∆3, ..., ∆n-1, ∆n   [∆i = Xi+1 − Xi]
FSD with bias                        ∆1 + K, ∆2 + K, ∆3 + K, ..., ∆n-1 + K, ∆n + K   [K = bias value]
Magnitude Encoding                   Nibble combination: 1 byte = 10 × (∆i + ∆i+1); Byte combination: 1 byte = 100 + ∆i
S-Run Length Encoding                If N successive identical values occur, they are represented by 00 and (N)
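A minimal sketch of the idea summarized in Table 7.3 is given below. It forms first-sample differences with a bias and then collapses runs of identical differences; the (value, count) pair representation is a simplification of the nibble/byte packing described above, so it illustrates the principle rather than the exact DMBRLE byte format. For the long night-time stretches of identical readings noted earlier, each run collapses into a single pair.

def fsd_rle_compress(samples):
    # Illustrative compression: biased first-sample differences followed by
    # run-length coding as (value, count) pairs (not the exact DMBRLE format).
    if not samples:
        return []
    bias = round(sum(samples) / len(samples))                    # bias value K (frame mean)
    diffs = [samples[0]] + [samples[i] - samples[i - 1] + bias   # FSD with bias; first sample kept
                            for i in range(1, len(samples))]
    encoded = [bias]
    i = 0
    while i < len(diffs):
        run = 1
        while i + run < len(diffs) and diffs[i + run] == diffs[i]:
            run += 1
        encoded += [diffs[i], run]                               # a run of identical values becomes one pair
        i += run
    return encoded

# Example: a short night-time stretch compresses to a handful of pairs.
print(fsd_rle_compress([400, 400, 400, 405, 410, 410]))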

● Temperature Control System:

A key requirement of any radiometric solar radiation measurement system is that its output does not change with temperature. From the datasheet of the selected phototransistor, the output current has a temperature dependence of approximately 1%/°C. To minimize this deviation, a temperature control system is included in the measurement system. This control system uses an integrated temperature sensor IC (LM35) to measure the internal temperature and a Peltier module (TEC1-12706) to hold it at a defined value. The current design keeps the internal temperature at a maximum of 25°C; the microcontroller sends the control signal to the Peltier module through a relay-based power stage, as shown in Fig. 3. To maintain the internal temperature more effectively, the outside wall of the measurement system is made of polyethene 10 mm thick. The microcontroller also uses the solar irradiance value as an input to stop the temperature control system at night and so optimize energy consumption.
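A minimal on/off control loop in the spirit of the description above is sketched below. The read_lm35_celsius, read_irradiance and set_peltier_relay callables are hypothetical placeholders for the actual sensor and relay interfaces; the 25 °C set point is taken from the text, while the night-time threshold is an assumed value.

import time

SETPOINT_C = 25.0            # maximum internal temperature maintained by the design
NIGHT_THRESHOLD_WM2 = 5.0    # assumed irradiance level below which the controller sleeps

def temperature_control_loop(read_lm35_celsius, read_irradiance, set_peltier_relay,
                             period_s=10.0):
    # Simple thermostat-style control of the Peltier module through a relay.
    # The three callables stand in for the real hardware interfaces.
    while True:
        if read_irradiance() < NIGHT_THRESHOLD_WM2:
            set_peltier_relay(False)      # night time: stop cooling to save energy
        elif read_lm35_celsius() > SETPOINT_C:
            set_peltier_relay(True)       # too warm: enable the Peltier cooler
        else:
            set_peltier_relay(False)      # at or below the set point
        time.sleep(period_s)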

● Environmental Protection System:


A prime concern for the sensor station is protection from different environmental hazards such as dust particles, wind and rain. To overcome this, the whole portable system is enclosed in a flexible electro-mechanical arrangement. This arrangement opens the flexible cover to expose only the sensor module while the solar irradiance reading is taken at the specified sampling interval. After data collection is complete, the arrangement closes the flexible cover again. This ensures minimum exposure time of the sensors to the environment, protecting the sensor module from outside hazards. The design of this arrangement is shown in Fig. 7.7.

Fig. 7.7. Electro-Mechanical arrangement to protect the sensor module from environmental hazards

7.2.2. Monitoring Station:

The monitoring station runs application software that can be operated by unskilled personnel. This application software provides a Graphical User Interface to control the sensor station and to acquire and visualize the solar irradiance data, as shown in Fig. 8. Once the "Data Acquisition" button is pressed in the graphical front end, the software establishes the communication channel between the monitoring station and the sensor station using handshaking signals, followed by a command signal to collect the compressed solar irradiance data. Data reliability and data acceptance are checked for the compressed data by computing the bit rate error (BRE) and the packet rate error (PRE) respectively. Upon receiving reliable and accepted data, a "Data Acquisition Completed" message pops up. Once the "Data Visualization" button is pressed, the software decompresses the compressed characters and reconstructs the original solar irradiance data samples. The decompression algorithm is the reverse logic of the DMBRLE compression algorithm (described in the next section).
BRE = Number of inaccurate bits / Total number of bits received

PRE = Number of inaccurate packets / Total number of packets received

Fig. 8. Graphical User Interface of the monitoring station to acquire and visualize global solar
irradiance data

● Decompression Algorithm:

The decompression technique is performed to extract the original data stream from the compressed
character. It is actualized by decompression of S-RLE technique followed by magnitude encoding and
FSD array. Whenever a data combination of 00 followed by a number is received, successive same
samples are retrieved from that combination. Similarly, data between 00 - 99 is preceding for
extraction from nibble combination and data greater than 100 for byte combination. Finally, original
data stream is computed by taking the values of FSD elements, mean value with the original sample.
The summarized rules for decompression are given in Table4.

Table 7.4: Summarized Decompression Algorithm

Compressed Element                    Interpretation for Decompression
00 byte followed by another byte      S-RLE
Byte between 00 and 99                Nibble Combination
Byte greater than 100                 Byte Combination
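Continuing the illustrative sketch given after Table 7.3, the matching decompression routine below expands the (value, count) pairs and then inverts the biased differences. It mirrors the logic of Table 7.4 rather than the exact byte-level format.

def fsd_rle_decompress(encoded):
    # Reverse of fsd_rle_compress: expand the (value, count) pairs, then undo
    # the biased first-sample differences.
    if not encoded:
        return []
    bias, pairs = encoded[0], encoded[1:]
    diffs = []
    for value, count in zip(pairs[0::2], pairs[1::2]):
        diffs += [value] * count
    samples = [diffs[0]]                          # the first element is the original first sample
    for d in diffs[1:]:
        samples.append(samples[-1] + d - bias)    # invert: X(i) = X(i-1) + (delta + K) - K
    return samples

# Round trip with the compression sketch shown earlier:
# fsd_rle_decompress(fsd_rle_compress(data)) == data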

7.3.1. Computation of Directional Response Error:

One of the fundamental characteristics of any solar irradiance measurement system is its cosine or directional response. The output of a reliable solar irradiance measurement system should not deviate from the ideal cosine law as the angle of incidence varies. Through Lambertian or directional response testing, the degree of precision with which the measured solar irradiance can be resolved into its normal component is calculated. Sunlight is the ideal source for this test, but continuous measurement is difficult because of the lack of long clear-sky periods. A system with a single phototransistor shows a normalized directional error of around 13.96% because of its narrow reception angle. To improve this response, five phototransistors are used in an arrangement in which four phototransistors are placed on a hemispherical surface at equal distances, corresponding to their reception angle, from the central one, as shown in Figs. 7.10 and 7.11(a). Among the four outer sensors, two are aligned east-west for correction of the zenith component and the other two north-south for correction of the azimuth component of the Sun. All the phototransistors are connected in parallel so that the overall current is collected from them. With the five-sensor arrangement, the system enhances the total reception angle and therefore the overall directional response. Fig. 7.11(b) shows the directional response of this arrangement, which closely follows the ideal directional response curve. The overall error for this arrangement is around 6.77% up to 90°, which is better than a single sensor. Table 7.5 presents a comparative study of the developed arrangement and other solar irradiance measurement systems in terms of directional response error.

∆_directional = { [R(Z) + R(−Z)] / 2 − ZERO(Z) } / { [ (R(0°) + R(0°)) / 2 − ZERO(Z) ] × cos(Z) } × 100%   (17)

Where R(0°) is the pyranometer response at a 0° angle of incidence, R(Z) the pyranometer response at a Z° angle of incidence, and ZERO(Z) the pyranometer response to the dark signal at a Z° angle of incidence.
Fig. 7.10. Apparatus used to determine the directional response of the developed system.

Fig. 7.11. (a) Arrangement with five phototransistors; (b) normalized directional response of the developed system.

Table 7.5: Comparison of the developed system with other measurement systems in terms of directional response error

Solar Irradiance Measurement System              Directional Response Error
PSP (Eppley Lab) (Dunn et al., 2012)             < ±1 % up to 0 - 70°
CM 11 (Kipp & Zonen) (Dunn et al., 2012)         < ±3 % up to 70 - 80°
STAR (Weathertronics) (Dunn et al., 2012)        < ±2 % up to 0 - 70°; < ±5 % up to 70 - 80°
RTP (Maetincz et al., 2009)                      ±2 % up to 60°; ±6 % up to 80°
Developed Arrangement                            ±2.04 % up to 60°; ±4.47 % up to 80°; ±6.77 % up to 90°

Calibration:

The main objective of the calibration process is to enable the system to represent its output in terms of the global solar irradiance signal. As per ISO 9847:1992, calibration can be done by either an outdoor or an indoor method. The outdoor calibration process is carried out by comparing the developed arrangement with a reference pyranometer at the same 5-minute sampling rate. The pyranometer from Delta Ohm (LP PYRA 10) is used as the reference; it is categorized as a "secondary standard" according to ISO 9060:1990 and adopted by the World Meteorological Organization (WMO).

First, both instruments were placed at the same location (Latitude: 22° 34' N, Longitude: 88° 24' E, Elevation: 11 m) at a common tilt angle. Instantaneous readings from both instruments were taken at 5-minute intervals under cloudless, stable sky conditions over 5 consecutive days. For each sampling interval the Calibration Factor [CF(j)] is calculated and finally the Calibration Multiplier [CM] is computed over all sampling intervals.

CF(j) = CF_S × ( Σ_{i=1}^{n} V_SP ) / ( Σ_{i=1}^{n} V_DS )

CM = (1/m) × Σ_{j=1}^{m} CF(j)

[Where CF_S = calibration factor of the standard pyranometer, V_SP = instantaneous voltage of the standard pyranometer, V_DS = instantaneous voltage of the developed system, n = number of readings in one sampling interval, m = total number of sampling intervals]
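The outdoor calibration arithmetic above is straightforward to express in code. The sketch below assumes the readings for each sampling interval are available as equal-length lists of instantaneous voltages; the numerical values are made up purely to show the calculation.

def calibration_factor(cf_standard, v_standard, v_developed):
    # CF(j) for one sampling interval: CF_S * sum(V_SP) / sum(V_DS).
    return cf_standard * sum(v_standard) / sum(v_developed)

def calibration_multiplier(cf_values):
    # CM: mean of the per-interval calibration factors.
    return sum(cf_values) / len(cf_values)

cf_s = 9.0   # assumed calibration factor of the standard pyranometer, in consistent units
intervals = [
    ([4.10, 4.12, 4.15], [3.95, 3.98, 4.02]),   # (standard voltages, developed-system voltages)
    ([4.30, 4.28, 4.33], [4.15, 4.12, 4.18]),
]
cfs = [calibration_factor(cf_s, v_sp, v_ds) for v_sp, v_ds in intervals]
print(f"CM = {calibration_multiplier(cfs):.3f}")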

After calibrating the developed system, the next task is to verify its acceptability by computing the mean absolute error and the coefficient of determination. The response of the developed system after calibration, along with that of the standard pyranometer (maker: Delta Ohm, model: LP PYRA 10), for a single day is shown in Fig. 7.12. The developed arrangement yields a mean absolute error of 1.27% and a coefficient of determination of 0.999, which validates the calibration process. Table 7.6 presents a comparative study of the developed arrangement and other solar irradiance measurement systems in terms of mean absolute error.

Table 7.6: Performance analysis in terms of mean absolute error

Solar Irradiance Measuring System Mean absolute error


BF3 (Wood et al., 2003) 4.7 %
UTP (Maetincz et al., 2009) 2.31 %
RTP (Maetincz et al., 2009) 1.54 %
Developed Arrangement 1.27 %
Mean absolute error = (1/m) × Σ_{j=1}^{m} | X_GSI,DS(j) − X_GSI,SP(j) |

Coefficient of determination = 1 − Σ_j ( X_GSI,DS(j) − X_GSI,SP(j) )² / Σ_j ( X_GSI,DS(j) − x̄ )²

[Where X_GSI,DS(j) = response of the developed system at the j-th time interval, X_GSI,SP(j) = response of the standard pyranometer at the j-th time interval, x̄ = mean of the developed-system response and m = number of sampling time intervals]
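The two acceptance metrics above are easy to compute once paired readings are available; the sketch below uses short made-up series only to show the arithmetic.

def mean_absolute_error(developed, standard):
    # Mean absolute difference between the developed system and the standard pyranometer.
    return sum(abs(d - s) for d, s in zip(developed, standard)) / len(developed)

def coefficient_of_determination(developed, standard):
    # 1 minus the ratio of the residual sum of squares to the total sum of squares
    # about the mean of the developed-system response, as in the formula above.
    mean_d = sum(developed) / len(developed)
    ss_res = sum((d - s) ** 2 for d, s in zip(developed, standard))
    ss_tot = sum((d - mean_d) ** 2 for d in developed)
    return 1.0 - ss_res / ss_tot

dev = [210.0, 452.0, 690.0, 843.0, 612.0]   # made-up irradiance readings (W/m^2)
std = [208.5, 450.0, 692.5, 845.0, 610.0]
print(f"MAE = {mean_absolute_error(dev, std):.2f} W/m2, "
      f"R^2 = {coefficient_of_determination(dev, std):.4f}")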

Fig. 12. Response of the calibrated developed system along with the standard pyranometer for a single day.

7.3.2. Testing of Compression Algorithm:

Firstly, we examined the developed compression algorithm on a simulation platform to evaluate its performance. Afterwards we executed the algorithm on the low-power ATmega32L microcontroller. The algorithm was tested on yearly global solar irradiance data provided by the National Renewable Energy Laboratory [NREL], sampled at 1-hour intervals, for different locations across India. To quantify the compression performance, we used the Compression Ratio (CR), defined as:

Compression Ratio = 100 × (Original data size − Compressed data size) / Original data size

This compression algorithm was refined through four intermediate stages, from simple FSD with no bias to FSD with bias plus S-RLE, to achieve the best performance, as represented in Table 7.7. The stepwise effect on the CR is strongly non-linear for the same data set; clearly, First Sample Difference with bias and S-RLE affords a better CR than the others.

The quality of the recovered signal is demonstrated in Table 7.8 by computing the Normalized Root Mean Square Error (NRMSE) of the reconstructed signal.

NRMSE = sqrt( mean( (x − x̂)² ) ) / ( max(x) − min(x) )

where x and x̂ represent the original and reconstructed signals respectively.

Table 7.7 compares the performance of the developed compression algorithm with some other compression algorithms, which are either lossless or lossy in nature. The developed compression algorithm was rigorously tested with global solar irradiance data from different geographical locations across India. The performance of the developed compression algorithm is shown in Fig. 7.13 for the global solar irradiance data of the Kolkata region.

Fig 7.13: Comparison between original and reconstructed signal after compression-decompression

Table 7.7: Comparative study between the developed algorithm and other compression algorithms.

Compression Algorithm                     Compression Ratio
RLE                                       17%
S-LWZ                                     53%
K-RLE (K = 2)                             56%
DMBRLE (developed algorithm)              74.56%

Table 7.8: Performance evaluation of the developed compression algorithm

Location     Latitude      Longitude     CR (%)    NRMSE
Kolkata      22° 38' N     88° 38' E     75.04     0
Amritsar     31° 37' N     74° 55' E     74.40     0
Ahmadabad    23° 03' N     72° 40' E     74.04     0
Begumpet     17° 43' N     78° 46' E     74.94     0
Jaipur       26° 55' N     75° 52' E     74.49     0
Lucknow      26° 55' N     80° 59' E     74.87     0
Nagpur       21° 09' N     79° 09' E     74.80     0
Trichy       10° 80' N     78° 68' E     75.05     0
Varanasi     25° 28' N     82° 95' E     74.66     0
7.3.3. Performance Analysis of the developed system:

Precision is one of the most important performance characteristics of any measurement system. The precision of the developed system is computed by exposing the sensor module to a 500-watt halogen-free xenon arc lamp continuously for a period of time, as shown in Fig. 7.14. The developed system exhibits a precision of 0.027 (shown in Table 7.9), which is within the acceptable range.

Precision = sqrt( Σ_j ( X_GSI,DS(j) − (Σ_j X_GSI,DS(j)) / m )² / (m − 1) ) / ( (Σ_j X_GSI,DS(j)) / m )

Where X_GSI,DS and m represent the solar irradiance values measured by the developed system and the number of measurements respectively.

Fig. 14. Apparatus used to determine the precision of the developed system.

Table 7.9: Performance analysis in terms of precision

Solar Irradiance Measuring System        Precision
RMP 001 (Medugu et al., 2010)            0.024
Developed Arrangement                    0.027

Some random daily solar irradiance values at 30-minute intervals under different weather conditions are presented in Fig. 15. These data were taken with the developed system as well as the standard pyranometer (maker: Delta Ohm, model: LP PYRA 10) at our University campus (Latitude: 22° 34' N, Longitude: 88° 24' E, Elevation: 11 m).

The short-term performance of the developed system is assessed by computing statistical errors. The statistical error in terms of mean bias error and root mean square error is worked out from the collected data using the following equations.


Mean bias error = Σ_j ( X_GSI,DS(j) − X_GSI,SP(j) ) / m

Root mean square error = sqrt( Σ_j ( X_GSI,DS(j) − X_GSI,SP(j) )² / m )

Where X_GSI,DS and X_GSI,SP represent the solar irradiance values measured by the developed system and the standard pyranometer respectively.
Fig. 15: Random daily solar irradiance values at 30-minute intervals under different weather conditions. The blue line represents the reading of the standard pyranometer and the red dashed line indicates the response of the developed system at a common location.


An ideal measurement system would have both mean bias error and root mean square error equal to zero, so lower values of these errors are desired for an effective measurement system. The developed system shows a mean bias error of -0.2819 W/m2 and a root mean square error of 2.40 W/m2, which are quite acceptable for a solar irradiance measurement system. Finally, the confidence level of the developed system is assessed with the t-statistic. Table 7.10 presents the statistical performance of the developed system compared with other systems.

t\text{-statistic} = \left[ \frac{(m-1)\,\mathrm{MBE}^{2}}{\mathrm{RMSE}^{2} - \mathrm{MBE}^{2}} \right]^{1/2}

[where X_{GSI,DS}(j) = response of the developed system at the j-th time interval, X_{GSI,SP}(j) = response of the standard pyranometer at the j-th time interval, and m = number of sampling time intervals]

Table 7.10: Statistical performance of the developed system compared with other systems.

Solar Irradiance Measuring System    Mean bias error (W/m²)    Root mean square error (W/m²)    t-statistic
RMP001 (Medugu et al., 2010)         -0.19                     0.72                             1.04
RMP003 (Burari et al., 2010)         12.59                     43.60                            1
Developed Arrangement                -0.2819                   2.40                             1.99
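The mean bias error, root mean square error and t-statistic defined above can be evaluated directly from paired readings of the developed system and the reference pyranometer. The following is a minimal Python sketch; the array names and sample values are hypothetical.

import numpy as np

def error_statistics(x_ds, x_sp):
    """MBE, RMSE and t-statistic between the developed system (x_ds)
    and the standard pyranometer (x_sp), both in W/m^2."""
    x_ds = np.asarray(x_ds, dtype=float)
    x_sp = np.asarray(x_sp, dtype=float)
    m = x_ds.size
    diff = x_ds - x_sp
    mbe = diff.mean()                                    # mean bias error
    rmse = np.sqrt(np.mean(diff ** 2))                   # root mean square error
    t = np.sqrt((m - 1) * mbe ** 2 / (rmse ** 2 - mbe ** 2))   # t-statistic
    return mbe, rmse, t

# Hypothetical 30-min samples (W/m^2)
x_ds = [420.5, 610.2, 835.7, 790.1, 515.3]
x_sp = [421.0, 612.5, 833.9, 792.0, 516.1]
print(error_statistics(x_ds, x_sp))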

8. Solar Radio Burst Detection System

8.1 Introduction:

The Sun plays a pivotal role in helping us understand the rest of the astronomical universe: its proximity to the Earth enables us to study its surface and all of its activity in exquisite detail. Hence, it acts as a paradigm for Sun-like stars.
Studying and modelling the diverse manifestations of the Sun is not solely an academic exercise for extending our understanding of the Sun's nature; it is also essential for comprehending the Sun-Earth connection. The explosive and transient events emerging from the Sun's corona eject highly energetic particles into interplanetary space which, if aimed towards the Earth, present a substantial danger to the very existence of life. Hence, the need to study and understand these energetic events is of prime importance from the viewpoint of space weather. 'Space weather' describes the dynamic and highly variable environmental conditions on the Sun, in the interplanetary medium and in the ionosphere-magnetosphere system down to the ground (Baker, 2005; Singh et al., 2010). The aim of space-weather studies is to predict solar variability and to understand its impact on life on Earth (Strong et al., 2012).
The solar energetic events significantly alter the Earth's radiation environment by prompt heating (Lean, 1997) and are detrimental to the scientific instrumentation on spacecraft and payloads (McKenna-Lawlor, 2008). Furthermore, the shock waves associated with solar energetic events play a significant role in compressing the Earth's magnetosphere and triggering geomagnetic storms (Schwenn, 2005) in the magnetosphere and ionosphere. The solar radio bursts linked with solar activity lead to disruption of satellite-to-ground communication and radar systems (Messerotti, 2008) along with GPS navigation system failures (Chen, 2005). Radio detection systems are very advantageous in solar observation for understanding solar eruptive events, such as solar flares and CMEs in the solar atmosphere, through the associated radio bursts. The Sun is a highly variable source, and detection of its radio signatures offers the best available approach for probing the science linked with such events. Transient events of the Sun occur across a broad frequency range (≈10 kHz − 10 GHz), the majority of which are observable with good contrast at frequencies below about 500 MHz (Benz, 1993) from ground-based instruments. Emission from the Sun at radio frequencies originates at various heliocentric distances (r) in the solar atmosphere owing to the intrinsic electron density gradient, and the burst structure therefore alters from one frequency to another. Hence, for obtaining data on radio emission in relation to flares and CMEs, continuous monitoring across a broad frequency span is required. To serve this purpose, a variety of radio telescopes have been developed utilizing the antenna (based on the concept of transmission lines) as the elementary receiving component.
The antenna plays an essential role in any system where the communication is wireless in nature. In accordance with the IEEE standard definition, "antennas are defined as a means that can radiate or receive radio waves". The signal from the transmitter side is transmitted through space as electromagnetic energy by a transmitting antenna and captured by the receiver-side antenna, or vice versa. This in turn induces a voltage in the antenna (normally a conductor). The RF voltages induced in the receiving antenna are then passed to the receiving device, which converts the transmitted RF information back to its original form. Thus, the antenna can be considered a transducer responsible for converting a guided electromagnetic wave into free-space waves and vice versa. In general, any piece of structure or conducting wire acts as an antenna; however, the radiation characteristics depend on the dimensions of the structure used. Antennas are the elementary receiving component of a radio telescope and come in different types as well as dimensions. According to the reciprocity theorem, the characteristics of an antenna when it is transmitting should precisely match those when it is used as a receiving antenna.

8.2 Different Types Of Antennas:


In this section, different forms of antennas have been introduced and are being discussed briefly
(Balanis, 2016).
8.2.1 Wire Antenna:
These types of antennas are very common as they are seen virtually everywhere. Wire antennas are one of the oldest types. They are easy to construct, cheaply available and versatile across many areas of application. A wire antenna consists of an elongated wire hung above the ground. The length of the wire does not depend on the wavelength of the radio waves used but is chosen solely for convenience. The wire may be straight, or it may be stretched back and forth between walls or trees in a zig-zag pattern, simply to expose enough wire in the air. A wire antenna can take various shapes such as a straight wire (dipole), helix or loop. Apart from the circular form, loop antennas can take any form such as an ellipse, square, rectangle or other configuration. Various forms of wire antenna are shown in Figure 8.1. Wire antennas find use as receiving antennas for the short, long and medium wave bands, and as transmitting antennas for small outdoor applications or in situations where it is not possible to install a more permanent antenna.
8.2.2 Aperture Antenna:
These types of antennas may be more familiar to us today than in the past owing to the increasing requirement for more advanced forms of antennas as well as the use of higher frequencies. This type of antenna has found use in spacecraft and aircraft applications as it can very easily be flush-mounted on the spacecraft or aircraft skin. Additionally, aperture antennas can be coated with a dielectric material to protect them in hazardous environmental conditions. Various types of aperture antenna are shown in Figure 8.2.

Figure 8.2 Different types of aperture antenna (source: Balanis, C. A. Antenna theory: analysis and design. John Wiley & Sons, 4th edition)

8.2.3 Microstrip Antenna:

This type of antenna came into demand during the 1970s owing to its spaceborne applications. Today they are utilized in commercial and government applications. The basic structure of the microstrip antenna comprises a metallized patch mounted over a grounded dielectric substrate and fed against the ground at a suitable position. The metallized patch can take various forms, but circular and rectangular patches are most favoured because they can be fabricated and analyzed with ease and have good radiation characteristics. The various advantages associated with this type of antenna, such as simple and inexpensive fabrication using printed-circuit technology, conformability to planar and nonplanar surfaces, mechanical robustness when installed on rigid surfaces, and great versatility in performance characteristics, make it suitable for use on the surfaces of spacecraft, satellites, high-performance aircraft, mobile devices and even cars.

Figure 8.3 Different types of Microstrip antenna (source: Balanis, C. A. Antenna theory: analysis and design. John
Wiley & Sons,4th edition)
8.2.4 Reflector Antennas: The success of outer-space exploration triggered an expansion of antenna theory. The necessity to communicate across great distances led to the development of sophisticated antennas that can transmit and receive signals travelling millions of miles. That purpose gave rise to parabolic reflector antennas. This type of antenna is constructed with large diameters to achieve the high gain needed when transmitting or receiving signals over millions of miles. A typical parabolic reflector is shown in Figure 8.4.

Figure 8.4 Typical configuration of Reflector type antenna (source: Balanis, C. A. Antenna theory: analysis and design. John Wiley & Sons, 4th edition)

8.2.5 Lens Antenna:
Lenses are mainly utilized for collimating incident divergent signals so as to prevent them from scattering in different directions. With a proper geometrical configuration and a suitable choice of lens material, the arrangement can transform different forms of divergent energy into plane waves. Similar to parabolic reflectors, the lens antenna finds use in high-frequency applications; for low-frequency operation, the dimensions as well as the weight of the lens antenna become very large. These antennas are categorized according to the geometrical shape and the material used for their construction. A typical lens antenna is shown in Figure 8.5.

Figure 8.5 Typical configuration of Lens type antenna (source: Balanis, C. A. Antenna theory: analysis and design.
John Wiley & Sons,4th edition)

8.2.6 Log Periodic Antenna:


Diverse applications of electromagnetics in advancing technology have warranted the exploration and utilization of most of the electromagnetic spectrum. Additionally, with the advent of broadband systems, the demand for broadband radiators has also increased. A simple, lightweight, small and economical antenna designed to operate over the whole frequency span of a given system is what is most desired. One such antenna is the log periodic antenna, which is multi-element and directional in nature and is designed to operate across a broad frequency span. The LP antenna comprises a number of half-wave dipole driven elements that increase progressively in length. The elements are pairs of metallic rods which are connected either in a straight or criss-cross manner, or through a coaxial cable.
Figure 8.6 Typical Log Periodic antenna with different connection arrangements (source: Balanis, C. A. Antenna theory: analysis and design. John Wiley & Sons, 4th edition)

8.3 How Does Antenna Radiate?

The structure associated with the transition region between a guided wave and a free-space wave (or vice versa) is termed an antenna, while a transmission line is the device responsible for guiding radio-frequency energy from one place to another. In general, it is desirable that this energy transfer takes place with as little attenuation, stray radiation and heat loss as possible. This in turn means that while the energy is being carried from one place to another it should be confined to the transmission line or bound closely to it. Hence, the wave transmitted along the line is essentially one-dimensional: it adheres to the line rather than spreading out in space. An infinite lossless transmission line carries a uniform travelling wave when a generator is connected to it. When the line is short-circuited, the outgoing waves are reflected, generating a standing wave on the line owing to interference between the reflected and outgoing waves. A standing wave comprises local concentrations of energy. When the reflected wave is equal to the outgoing wave we obtain a pure standing wave; the energy concentration in such a wave oscillates from completely electric to completely magnetic and back twice per cycle. This type of behaviour is observed in a resonant circuit, or resonator. The term resonator is generally used for devices whose stored energy concentration is large compared with the net energy flow per cycle. Hence, the antenna is responsible for receiving or transmitting energy, transmission lines guide energy, and resonators store energy. If the guided wave travelling along the transmission line is opened out, as depicted in Figure 8.7, it will radiate as a free-space wave.

Figure 8.7 Generator, transmission line, antenna and separation of energy as free space wave (source: Kraus, J. D., &
Marhefka, R. J. Antennas for all Applications; 2nd edition)

The wave along the transmission line is a plane wave, while the free-space wave emerging from it is a spherically expanding wave. The energy is guided as a plane wave with low loss while it travels along the uniform part of the line, where the separation between the wires is a small fraction of the wavelength. However, when the transmission-line separation approaches a wavelength or more, the wave radiates out, and the opened-out line behaves as an antenna that launches the free-space wave. The current travelling along the transmission line terminates at the open end, but the associated field continues to propagate.
8.4 Antenna Parameters:
To describe the performance of an antenna, knowledge of various performance parameters is necessary in order to determine whether or not the designed antenna is suitable for the required application. In this section, some of the important antenna parameters, applicable to all types of antennas, are discussed.
8.4.1 Input Impedance:


The ratio of the relevant components of the electric to the magnetic field at a point, or the ratio of voltage to current at a pair of terminals, defines the antenna input impedance.

Input impedance ( Z a) of the antenna (under no-load condition) comprised of real and imaginary part
which is written as,
Z a=Ra + j X a (8.1)
where, Ra : input resistance, X a: input reactance
Due to reciprocity, the antenna impedance is the same for transmitting and receiving operation. R_a denotes dissipation, which takes place in two ways: one part is power that leaves the antenna and does not return (i.e. radiation), while the other part is ohmic loss, similar to that in a lumped resistor. X_a represents the power stored in the near field of the antenna.
Figure 8.8 (a) When the antenna is transmitting the wave (b) Thevenin equivalent circuit

Generally, the resistive part comprises two components,
Ra =Rr + R L (8.2)
where, Rr : radiation resistance, R L: loss resistance
Assuming that the antenna is connected to a source having internal impedance ( Z g)
Z g=R g + j X g (8.3)
then Figure 8.8(a) reduces to the equivalent circuit shown in Figure 8.8(b). To find the amount of power supplied to R_r for radiation, as well as the heat dissipated in R_L, we calculate the power supplied to the antenna for radiation, which is given by

P_r = \frac{1}{2} R_r |I_g|^{2} (8.4)

while the heat dissipation is given by

P_L = \frac{1}{2} R_L |I_g|^{2}

The rest of the power is dissipated as heat in the internal impedance of the source excitation, which is given by

P_g = \frac{|V_g|^{2}}{2}
Maximum power transfer occurs when the antenna is properly matched:
R_g = R_r + R_L
X_a = -X_g
If the antenna is connected to the driving circuit through a transmission line of characteristic impedance Z_o, then the characteristic impedance of the line should be matched to the antenna:
Z_a = Z_o ;  Z_o = R_r + R_L ;  X_a = 0
Z_in denotes the internal impedance seen looking towards the terminating (receiver) side of a receiving antenna. Generally, Z_in ≠ Z_a. For modelling the equivalent circuit of a receiving antenna, this internal impedance is utilized in the same way as the input impedance is used when modelling the equivalent circuit of a transmitting antenna.
8.4.2 Reflection Coefficient:
The reflection coefficient of the transmitting or receiving antenna is given by

Γ = \frac{Z_a - Z_o}{Z_a + Z_o}   (for a transmitting antenna)

Γ = \frac{Z_a - Z_{in}}{Z_a + Z_{in}}   (for a receiving antenna)

Γ is a dimensionless parameter which can either be calculated or measured; its magnitude lies between 0 and 1. When there is a mismatch between the transmission-line characteristic impedance and the transmitting-antenna input impedance (Z_o ≠ Z_a), return loss occurs due to reflection of the wave at the antenna terminals. When expressed in dB, the reflection coefficient is a negative value.
8.4.3 Return Loss:
The return loss quantifies the amount of signal that is reflected back when the impedances do not match, and is expressed as

Return loss = -20 log|Γ|  (dB)

The higher the return loss, the more power is delivered to the antenna.
8.4.4 Voltage Standing Wave Ratio:
In order to deliver maximum power to the antenna, the impedance of the transmitting or receiving antenna should match the characteristic impedance of the transmission line. VSWR is the parameter that numerically describes how well the antenna impedance is matched to the transmission line it is connected to. VSWR is defined as

VSWR = \frac{1+|Γ|}{1-|Γ|}

and is a function of the reflection coefficient. VSWR is a positive real number with a minimum value of 1.0 (the ideal condition in which the antenna reflects no power). The smaller the VSWR, the better the matching between the transmission line and the antenna impedance, and hence the more power is delivered to the antenna.
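The relations between input impedance, reflection coefficient, return loss and VSWR given above can be checked numerically. The short Python sketch below is only an illustration; the 73 + j42.5 Ω value is the textbook impedance of a thin half-wave dipole and a 50 Ω line is assumed.

import numpy as np

def matching_figures(z_a, z_0=50.0):
    """Reflection coefficient, return loss (dB) and VSWR for an antenna of
    impedance z_a fed by a line of characteristic impedance z_0."""
    gamma = (z_a - z_0) / (z_a + z_0)
    return_loss_db = -20 * np.log10(abs(gamma))
    vswr = (1 + abs(gamma)) / (1 - abs(gamma))
    return gamma, return_loss_db, vswr

# Illustrative antenna impedance: thin half-wave dipole, 73 + j42.5 ohm
print(matching_figures(73 + 42.5j))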
8.4.5 Radiation Pattern:
The antenna or radiation pattern is a graphical representation of a mathematical function of
antenna radiation properties as a function of spherical coordinates. For the majority of the
instances, antenna pattern is obtained from the far-field region, which is denoted as a function
of directional coordinates. 3-D antenna patterns can be estimated using a spherical coordinate
system denoting the radiation power strength in the far-field sphere enclosing the antenna.
Hence, the x-y plane (φ sweep at θ = 90°) represents the azimuthal plane, which contains the magnetic field vector (H-plane), while the x-z plane (θ sweep at φ = 0°) represents the elevation plane, which contains the electric field vector (E-plane). The H and E planes contain the direction of maximum radiation. Figure 8.9 shows a 3-D radiation pattern plot of the half-wave dipole.
Figure 8.9 A typical 3-D antenna pattern plot of the half wave dipole (source: Rahman, I., & Aftab Uddin Tonmoy, S.
(2014). Designing Of Log Periodic Dipole Antenna (Lpda) And It’s Performance Analysis)

From Figure 8.9 it can be seen that the maximum radiation occurs in the θ = 90° plane, i.e. for all φ values across the azimuthal plane. At θ = 0° and 180°, nulls occur in the radiation pattern (i.e. along the z-axis, off the ends of the dipole). Figure 8.10 depicts the 2-D antenna pattern plotted on polar plots for varying θ at φ = 0° and for varying φ at θ = 90°, respectively.

Figure 8.10 A typical 2-D polar plot of the half wave dipole (source: Rahman, I., & Aftab Uddin Tonmoy, S. (2014).
Designing Of Log Periodic Dipole Antenna (Lpda) And It’s Performance Analysis)

In fact, Figure 8.10 shows just two cuts of the radiation pattern, and the 3-D antenna radiation pattern can easily be visualized from these two-dimensional polar plots. The models as well as the patterns depict the parameters of the half-wave dipole, which is usually regarded as a nearly omnidirectional radiator. Only the isotropic source is considered a true omnidirectional radiator, and it exists only theoretically. Hence, an isotropic radiator is defined as a hypothetical lossless antenna that emits radiation equally in all directions.
8.4.6 Radiation Pattern Lobes And Beamwidth:
Different sections of the antenna pattern are termed lobes. The lobes can be classified as the major (main) lobe, minor lobes, side lobes and back lobes. A radiation lobe is a part of the antenna's radiation pattern bounded by regions of relatively weak radiation intensity.
The major lobe (or main beam) represents the direction in which the field strength is maximum and is hence termed the direction of maximum radiation; the direction of the major lobe defines the antenna directivity. The radiation lobes other than the main lobe are the minor lobes, which comprise the side and back lobes. Back lobes occur exactly opposite to the main lobe. Both the side and back lobes represent radiation in undesirable directions and hence should be reduced as much as possible.
Linked with the antenna's radiation pattern is the beamwidth, which is the angular separation between two points of equal field strength on opposite sides of the pattern maximum. A number of beamwidths exist in the radiation pattern of an antenna, of which the Half Power Beamwidth (HPBW) is most widely used. The HPBW is the angular separation between the directions in which the field strength decreases to 1/√2 of its maximum value.

Figure 8.11 Polar form representation of Radiation Lobes and beamwidths of amplitude pattern of an antenna
(source: Balanis, C. A. Antenna theory: analysis and design. John Wiley & Sons,4th edition)
As the power density of a wave is proportional to the square of the electric field, the power density decreases to half of its maximum value when the electric field is reduced to 1/√2 of its maximum value, i.e. a reduction of 3 dB in power density. Hence, the HPBW is also known as the 3-dB beamwidth. The other crucial beamwidth is the First-Null Beamwidth (FNBW), which is defined as the angular separation between the first nulls of the radiation pattern.
The local maxima in the antenna pattern are termed the side lobes of the antenna pattern. Ideally, the antenna should radiate only in the direction of the main beam; the presence of side lobes denotes power leakage in unwanted directions. Side lobes are therefore normally undesirable as far as the radiation pattern of the antenna is concerned.

8.4.7 Directivity Of An Antenna:


It provides a measure of how effectively the antenna directs power in a required direction in comparison to other directions, and is used for comparing the performance of different antennas. The directivity of the antenna is defined as

D(θ,φ) = \frac{\text{radiation intensity in a specific direction, } U(θ,φ)}{\text{radiation intensity averaged over all directions, } U_{avg}} = \frac{U(θ,φ)}{U_{avg}}

Here, the radiation intensity U(θ,φ) represents the power radiated in a specific direction per unit solid angle. It can be written as

U(θ,φ) = U_m |F(θ,φ)|^{2}

where U_m denotes the maximum radiation intensity and |F(θ,φ)|^{2} is the antenna power pattern normalized to a maximum value of unity. Integration of the radiation intensity over all angles surrounding the antenna gives the total radiated power

P = \iint U(θ,φ)\, dΩ = U_m \iint |F(θ,φ)|^{2}\, dΩ

U_{avg} is the average radiation intensity over the 4π solid angle and is given by

U_{avg} = \frac{1}{4π} \iint U(θ,φ)\, dΩ = \frac{P}{4π}

Hence, the directivity can be written as

D(θ,φ) = \frac{U(θ,φ)}{P/4π} = \frac{4π U(θ,φ)}{P}
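The directivity integral above can be evaluated numerically for any sampled power pattern by approximating the solid-angle integral on a θ-φ grid. The minimal NumPy sketch below is only an illustration: it uses the classic short-dipole pattern U ∝ sin²θ (not any antenna described in this chapter) as a check, since its exact peak directivity is 1.5.

import numpy as np

def directivity(pattern, n=400):
    """Peak directivity D = 4*pi*U_max / P, where the total radiated power
    P = double integral of U(theta, phi)*sin(theta) is approximated by a
    midpoint-rule sum on a regular (theta, phi) grid."""
    theta = (np.arange(n) + 0.5) * np.pi / n          # midpoints in 0..pi
    phi = (np.arange(2 * n) + 0.5) * np.pi / n        # midpoints in 0..2*pi
    T, P = np.meshgrid(theta, phi, indexing="ij")
    U = pattern(T, P)                                 # sampled radiation intensity
    total = np.sum(U * np.sin(T)) * (np.pi / n) ** 2  # approximate total power
    return 4 * np.pi * U.max() / total

# Short-dipole power pattern U ∝ sin^2(theta); exact directivity is 1.5
print(directivity(lambda t, p: np.sin(t) ** 2))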

8.4.8 Antenna Efficiency:
Due to ohmic losses on the surface of the antenna, a portion of the power delivered to the antenna terminals is lost as heat. As a result, not all of the power supplied to the antenna is radiated. The antenna efficiency is therefore expressed as

e_{ant} = \frac{\text{power radiated by the antenna}}{\text{total power supplied to the antenna}} = \frac{P}{P_{in}} = \frac{P}{P + P_o}

where P_{in} = input power supplied to the antenna, P_o = power dissipated owing to ohmic loss in the antenna, and P = radiated power.
8.4.9 Antenna Gain:
Another useful parameter describing antenna performance is the gain. While the antenna gain is closely associated with the directivity, it accounts for the antenna efficiency as well as its directional capability. The antenna gain in a specified direction is defined as

Gain = 4π \frac{\text{radiation intensity}}{\text{total power supplied}} = \frac{4π U(θ,φ)}{P_{in}}
8.4.10 Bandwidth:
The antenna bandwidth is the frequency range within which the performance of the antenna, with respect to some chosen parameters, conforms to a specified standard. The bandwidth is regarded as the frequency range on either side of the centre frequency over which the antenna characteristics have acceptable values relative to those at the centre frequency. In simple words, a signal is transmitted and received over a frequency range, and this specific range is allocated to a particular signal so that other signals do not intrude when the signal is transmitted.
8.4.11 Polarization Of Antenna:
Antenna polarization in a particular direction is defined as the polarization of the wave radiated (or transmitted) by the antenna in the far-field region. Polarization can be categorized as linear, circular or elliptical. If the electric field vector moves back and forth along a line, the wave is linearly polarized. If the electric field traces an elliptical locus, the field is said to be elliptically polarized. If the electric field stays uniform in length but traces a circular path, it is said to be circularly polarized. This is depicted in Figure 8.12.
Figure 8.12 (a) Linear polarization (b) Circular polarization (c) Elliptical polarization (source: https://nptel.ac.in/courses/117/107/117107035/)

8.5 Frequency Independent Antennas:


Diverse implementations of electromagnetics in technological advancement have warranted the exploration and use of most of the electromagnetic spectrum. Additionally, the development of broadband systems has necessitated the design of broadband radiators. Hence, a simple, lightweight and low-cost antenna capable of operating across the entire frequency band of a given system would be most desirable. One such antenna, very popular for broadband applications, is the Log Periodic Dipole Antenna (LPDA) depicted in Figure 8.13.

Figure 8.13 Schematic of Log Periodic Dipole Antenna (LPDA) (source: Rahman, I., & Aftab Uddin Tonmoy, S. (2014).
Designing Of Log Periodic Dipole Antenna (Lpda) And It’s Performance Analysis)
In the present work, an attempt has been made to design the LPD antenna (the receiving element) as a first step towards designing a solar radio burst detection system. A log periodic antenna is a system of driven elements (dipoles) stacked together as shown in Figure 8.13. LPDAs are constructed for operation across a wide band of frequencies. The benefit of using log-periodic structures is that they display essentially stable antenna characteristics across the frequency band. Not all elements in the antenna system are active at a specific frequency; rather, the design is such that the active region shifts through the elements as the operating frequency changes. There are different types of log-periodic structures such as dipole, trapezoidal, planar, zig-zag, slot and V types. However, the type most favoured by amateurs is the log-periodic dipole array because of its stable characteristics across the operating frequency range. They are made to operate over a span of frequencies depending on the design parameters. Across the desired range, the electrical characteristics such as gain, front-to-back ratio, feed-point impedance and so forth stay approximately constant, which is not true for other types of antennas. For example, the front-to-back ratio or gain (or both) of Yagi-Uda or quad antennas changes when the operating frequency deviates from the optimum design frequency, which also leads to an increase in SWR. Even terminated antennas suffer a significant change in gain when the operating frequency deviates.
From Figure 8.13 it can be seen that the log periodic antenna comprises several dipole elements of varying lengths and spacings. The dipole lengths and the spacings between them increase smoothly as one moves from the smallest element to the largest. It is upon this feature that the design of the LPDA is based, which allows the operating frequency to change without greatly affecting the electrical operation of the antenna. The individual elements are connected alternately to a distributive feeder system; that is, the antenna elements are connected in opposition to each other so that the total radiation is produced towards the direction of the shorter elements while the radiation towards the larger elements cancels out. A coaxial line is used for this purpose, running through the feeder from the longest to the shortest antenna element. The energy at a specific frequency travels along the feeder until it arrives at the location where the dipole electrical lengths as well as the phase relationships are such that radiation is produced. As the frequency changes, the resonant-element position shifts smoothly from one element to the next. The lower and upper frequency limits are then set by the lengths of the longest and shortest elements, or conversely, the lengths must be selected in such a way that the bandwidth requirement is satisfied. The longest element corresponds to approximately ¼ wavelength at the lower frequency limit, while the half-element on the shorter side should be approximately ¼ λ at the higher frequency limit of the required operational bandwidth.
8.5.1 Design Equation Of Log Periodic Antenna:
The log-periodic structure is frequency independent in the sense that its electrical parameters repeat periodically with the logarithm of the frequency. When the frequency f_1 shifts to a frequency f_2 within the antenna passband, the relationship is

f_2 = \frac{f_1}{τ}   (8.21)

where τ is a design constant with τ < 1.0. Similarly,

f_3 = \frac{f_1}{τ^{2}}   (8.22)

f_n = \frac{f_1}{τ^{n-1}}   (8.23)

where f_1 is the lowest frequency and f_n the highest frequency.
The design constant τ relates the element lengths l and the inter-element spacings d by:

l_2 = τ l_1   (8.24)

l_3 = τ l_2   (8.25)

l_n = τ l_{n-1}   (8.26)

where l_n is the length of the shortest element, and

d_{23} = τ d_{12}   (8.27)

d_{34} = τ d_{23}   (8.28)

d_{n-1,n} = τ d_{n-2,n-1}   (8.29)

where d_{23} is the spacing between elements 2 and 3.


The individual elements are given a 180° phase shift by alternating the element connections, as shown in Figure 8.13. When the antenna is operating near the middle of its frequency span, the elements closer to the input, being nearly out of phase and closely spaced, cancel each other's radiation. As the element spacing increases down the antenna, a point is reached where the phase delay along the feeder, together with the 180° connection phase shift, contributes a total phase shift of nearly 360°; this is called the antenna active region. Hence, the radiation from the antenna dipoles is in phase and is directed towards the antenna apex, resulting in the lobe formation illustrated in Figure 8.14. In practice the active region comprises more than one element; the actual number is determined by the design constant τ and the apex angle α. Elements outside the active region receive minimal direct power.

Figure 8.14 Configuration and antenna pattern of Log Periodic Dipole Antenna (LPDA) (source: Rahman, I., & Aftab
Uddin Tonmoy, S. (2014). Designing Of Log Periodic Dipole Antenna (Lpda) And It’s Performance Analysis)

It has been observed that the largest elements resonate below the operating frequency and are inductive in nature, while those at the front are capacitive. Hence, the elements just behind the active region behave as reflectors while those in front act as directors. If an LPDA is designed to operate over a given frequency range, the design should also incorporate an active region of antenna elements for the lower and upper frequency limits, having a bandwidth B_ar:

B_s = B × B_ar   (8.30)

where B_s is the structure bandwidth, B is the operating bandwidth given by f_h/f_l (f_h is the highest frequency and f_l the lowest frequency, both in MHz), and B_ar is the bandwidth of the active region. Element lengths falling outside the bandwidth of the active region do not participate in the antenna operation. The gain is directly related to the antenna directivity and is determined by the design constant τ and the relative element spacing constant σ. Figure 8.15 illustrates the relation between τ and σ.
Increasing τ increases the number of elements, while increasing σ lengthens the boom. The relationship between τ (design constant), σ (relative spacing constant) and α (apex angle) is given by

σ = \frac{1}{4}(1-τ)\cot α   (8.31)

Also,

σ = \frac{d_{n,n-1}}{2 l_{n-1}}   (8.32)

σ_{opt} = 0.234 τ - 0.051   (8.33)

and the apex angle α can be determined from

\cot α = \frac{4σ}{1-τ}   (8.34)

Figure 8.15 Contour plot of constant directivity in dB versus τ and σ (source: The ARRL Antenna Book, 21st Edition, ARRL, Connecticut, USA)

8.5.2 Design Calculation Of Log Periodic Antenna For Capturing Solar Radio Burst Signal:

In order to design the log periodic antenna for capturing the solar radio burst signal, the following design procedure has been followed from The ARRL Antenna Book (21st Edition):

▪ Design constant (τ) = 0.805
▪ Relative spacing constant (σ) = 0.07
▪ Highest operating frequency (f_h) = 1300 MHz; lowest operating frequency (f_l) = 50 MHz
▪ Operating bandwidth B = f_h / f_l = 26
▪ Apex angle (α): cot α = 4σ/(1−τ) = (4 × 0.07)/(1 − 0.805) = 1.4359
  so tan α = 1/1.4359 = 0.6964 and α = tan⁻¹(0.6964) = 34.85°
▪ Bandwidth of the active region: B_ar = 1.1 + 7.7(1−τ)² cot α = 1.1 + 7.7(1 − 0.805)² × 1.4359 = 1.5204
▪ Structure bandwidth: B_s = B × B_ar = 26 × 1.5204 = 39.531

▪ Length of the boom:
  L (feet) = \frac{1}{4}\left(1 - \frac{1}{B_s}\right)\cot α \; λ_{max} = \frac{1}{4}\left(1 - \frac{1}{39.531}\right) × 1.4359 × 19.68 = 6.8852 feet ≈ 2.1 m
  where the longest free-space wavelength λ_max = 984/f_l = 19.68 feet.


▪ Total number of elements: N = 1 + \frac{\log B_s}{\log(1/τ)} = 1 + \frac{\log 39.531}{\log(1/0.805)} = 17.23 ≈ 17 elements
▪ Length of the longest element: L1 = 492/f_l = 492/50 = 9.84 feet ≈ 2.9 m
▪ Calculation of the remaining length of the elements
(L2) = L1 × τ = 2.9 × 0.805 = 2.335 m
(L3) = L2 × τ = 2.335 × 0.805 = 1.877 m
(L4) = L3 × τ = 1.877 × 0.805 = 1.511 m
(L5) = L4 × τ = 1.511 × 0.805 = 1.216 m
(L6) = L5 × τ = 1.216 × 0.805 = 0.979 m
(L7) = L6 × τ = 0.979 × 0.805 = 0.788 m
(L8) = L7 × τ = 0.788 × 0.805 = 0.634 m
(L9) = L8 × τ = 0.634 × 0.805 = 0.511 m
(L10) = L9 × τ = 0.511× 0.805 = 0.411 m
(L11) = L10 × τ = 0.411 × 0.805 = 0.331 m
(L12) = L11 × τ = 0.331 × 0.805 = 0.266 m
(L13) = L12 × τ = 0.266 × 0.805 = 0.214 m
(L14) = L13 × τ = 0.214 × 0.805 = 0.173 m
(L15) = L14 × τ = 0.173 × 0.805 = 0.139 m
(L16) = L15 × τ = 0.139 × 0.805 = 0.112 m
(L17) = L16 × τ = 0.112 × 0.805 = 0.090 m
▪ Calculation of spacing between the elements:
d 1=σ × 2 L 1= 0.07 × 2(2.9) = 0.406 m
d 2=σ ×2 L 2= 0.07 × 2(2.335) = 0.326 m
d 3=σ ×2 L3 = 0.07 × 2(1.877) = 0.263 m
d 4 =σ × 2 L 4 = 0.07 × 2(1.511) = 0.2115 m
d 5=σ ×2 L5= 0.07 × 2(1.216) = 0.170 m
d 6 =σ ×2 L6 = 0.07 × 2(0.979) = 0.137 m
d 7 =σ ×2 L7 = 0.07 × 2(0.788) = 0.110 m
d 8 =σ ×2 L8 = 0.07 × 2(0.634) = 0.0887 m
d 9 =σ ×2 L9 = 0.07 × 2(0.511) = 0.0715 m
d 10 =σ × 2 L 10= 0.07 × 2(0.411) = 0.0575 m
d 11=σ × 2 L 11= 0.07 × 2(0.331) = 0.0463 m
d 12 =σ × 2 L 12= 0.07 × 2(0.266) = 0.0372 m
d 13 =σ × 2 L 13= 0.07 × 2(0.214) = 0.0297 m
d 14 =σ ×2 L14 = 0.07 × 2(0.173) = 0.0242 m
d 15 =σ × 2 L 15= 0.07 × 2(0.139) = 0.0195 m
d 16 =σ × 2 L 16 = 0.07 × 2(0.112) = 0.0157 m
d 17 =σ × 2 L 17 = 0.07 × 2(0.090) = 0.0126 m
▪ Calculation of element diameter (assuming Length/Diameter = 125.8):

Element diameter (1) = 2.9/125.8 = 0.0230 m
Element diameter (2) = 2.335/125.8 = 0.0185 m
Element diameter (3) = 1.877/125.8 = 0.0149 m
Element diameter (4) = 1.511/125.8 = 0.0120 m
Element diameter (5) = 1.216/125.8 = 0.0097 m
Element diameter (6) = 0.979/125.8 = 0.0078 m
Element diameter (7) = 0.788/125.8 = 0.0063 m
Element diameter (8) = 0.634/125.8 = 0.0050 m
Element diameter (9) = 0.511/125.8 = 0.0041 m
Element diameter (10) = 0.411/125.8 = 0.0033 m
Element diameter (11) = 0.331/125.8 = 0.0026 m
Element diameter (12) = 0.266/125.8 = 0.0021 m
Element diameter (13) = 0.214/125.8 = 0.0017 m
Element diameter (14) = 0.173/125.8 = 0.0014 m
Element diameter (15) = 0.139/125.8 = 0.0011 m
Element diameter (16) = 0.112/125.8 = 0.0009 m
Element diameter (17) = 0.090/125.8 = 0.0007 m
▪ Frequency shift between Antenna Element:
f_1 = 50 MHz
f_2 = f_1/τ = 62.11 MHz
f_3 = f_1/τ² = 77.158 MHz
f_4 = f_1/τ³ = 95.848 MHz
f_5 = f_1/τ⁴ = 119.066 MHz
f_6 = f_1/τ⁵ = 147.908 MHz
f_7 = f_1/τ⁶ = 183.732 MHz
f_8 = f_1/τ⁷ = 228.244 MHz
f_9 = f_1/τ⁸ = 283.532 MHz
f_10 = f_1/τ⁹ = 352.214 MHz
f_11 = f_1/τ¹⁰ = 437.533 MHz
f_12 = f_1/τ¹¹ = 543.518 MHz
f_13 = f_1/τ¹² = 675.179 MHz
f_14 = f_1/τ¹³ = 838.733 MHz
f_15 = f_1/τ¹⁴ = 1041.904 MHz
f_16 = f_1/τ¹⁵ = 1294.291 MHz
f_17 = f_1/τ¹⁶ = 1607.814 MHz
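The whole chain of design numbers above (apex angle, bandwidths, element lengths, spacings and resonant frequencies) follows mechanically from τ, σ, f_l and f_h, so it is convenient to script it. The following is a minimal Python sketch using the same relations; it reproduces the hand calculation only approximately because of rounding, and the element count of 17 from the worked design is adopted directly.

import math

# Design inputs used in the worked calculation above
tau, sigma = 0.805, 0.07
f_low, f_high = 50.0, 1300.0                      # MHz

cot_alpha = 4 * sigma / (1 - tau)                 # cot(alpha), Eq. (8.34)
alpha_deg = math.degrees(math.atan(1 / cot_alpha))
b_ar = 1.1 + 7.7 * (1 - tau) ** 2 * cot_alpha     # active-region bandwidth
b_s = (f_high / f_low) * b_ar                     # structure bandwidth
n_formula = 1 + math.log(b_s) / math.log(1 / tau) # element-count formula
n_elem = 17                                       # count adopted in the design above

l1_m = (492 / f_low) * 0.3048                     # longest element (feet -> metres)
lengths = [l1_m * tau ** i for i in range(n_elem)]       # L1, L2, ... in metres
spacings = [2 * sigma * L for L in lengths]              # d_n = 2*sigma*L_n in metres
freqs = [f_low / tau ** i for i in range(n_elem)]        # element frequencies in MHz

print(f"alpha = {alpha_deg:.2f} deg, B_ar = {b_ar:.4f}, B_s = {b_s:.3f}, N (formula) = {n_formula:.2f}")
for i, (L, d, f) in enumerate(zip(lengths, spacings, freqs), start=1):
    print(f"element {i:2d}: L = {L:.3f} m, d = {d:.4f} m, f = {f:.1f} MHz")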

Designed Log Periodic Antenna:


Figure 8.16.: Our Constructed Log Periodic Dipole Antenna

8.6 Parabolic Dish Reflector:


Reflector antennas, in one form or another, have been utilized ever since radio waves were first produced by Heinrich Hertz in 1888. However, the art of analysing and designing such antenna systems was developed mainly during World War II, when numerous radar applications evolved. In the wake of the increasing need for reflectors for radio astronomy, microwave communication and satellite tracking, rapid progress was made in developing sophisticated analytical and experimental methods for shaping the surface of parabolic reflectors and optimizing the illumination over their aperture to maximize the antenna gain. The use of parabolic reflectors for deep-space applications, especially when one was stationed on the surface of the Moon, made the parabolic reflector almost a household name during the 1960s. Several variations of reflector antennas exist, but the most widely used is the parabolic reflector. The simplest parabolic antenna comprises two components: a metallic reflector surface and a smaller feed antenna at the focal point of the parabolic reflector.
8.6.1 Reflector Surface:
The reflector surface is metallic, shaped into a paraboloid and generally truncated at a circular rim that defines the diameter of the antenna. Many parabolic reflectors have a solid surface disc, often a reused antenna from a decommissioned television system, which makes the parabolic antenna economical. In other cases, the circular disc is of mesh type. Although the use of a mesh disc lowers the gain of the received signal, it substantially decreases the weight of the antenna structure without increasing the cost, which in turn makes tracking of objects much smoother. In addition, the mesh surface permits air to flow through the structure while preventing water accumulation, producing less torque on the structure when the antenna system is deployed in a location with adverse weather conditions.
8.6.2 Feed Arrangement Of The Antenna:
The real 'antenna' in a parabolic reflector is the feed element, which couples the transmission line or waveguide carrying the RF energy to free space; the surface of the parabolic reflector is completely passive. The feed element is generally positioned at the point where the radiation reflected by the dish converges, called the focal point of the reflector. If a point source is positioned at the focal point, the reflected rays emerge from the parabolic reflector as a parallel beam. Rays emerging in parallel are said to be collimated, and collimation represents the directivity of the antenna even though the emanated beam is not exactly parallel. When the receiver (or transmitter) is deployed at the reflector's focal point, the arrangement is said to be front-fed (as shown in Figure 8.17). In certain more intricate arrangements, such as the Cassegrain type, a secondary reflector called the sub-reflector is employed to direct the energy towards the main reflector surface from an antenna feed placed away from the main reflector. The radio-frequency (RF) transmitting or receiving equipment is connected to the feed antenna through a hollow waveguide or a transmission line (coaxial cable), as shown in Figure 8.17 below.
Figure 8.17 : (a) Front Feed Arrangement (b) Cassegrain Feed (source: Balanis, C. A. Antenna theory: analysis and design.
John Wiley & Sons,4th edition)

8.6.3 Focal Length Of Parabolic Reflector:


The focal length is the distance of the focal point from the centre of the reflector and is estimated using

f = \frac{D^{2}}{16 d}   (8.35)
where D = reflector diameter, d = reflector depth and f = reflector focal length. The radiation emanating from the antenna feed gives rise to currents on the reflective surface of the parabolic dish that are re-radiated in the required direction, perpendicular to the directrix plane of the paraboloid. There are various types of antenna feed systems; whichever is used, the directivity of the feed element must be sufficient to illuminate the reflector efficiently, and the feed should have the appropriate polarization for the required application, since the polarization of the overall antenna system is determined by the feed polarization. The simplest feed for low-frequency applications is the half-wave dipole, sometimes combined with a parasitic reflector; for high-frequency applications, a horn antenna is more practical and efficient.
The f/D ratio of the parabolic reflector is defined as

f/D = \frac{\text{focal length } f}{\text{dish diameter } D}   (8.36)
8.6.4 Beamwidth Of The Parabolic Reflectors
As the gain of a parabolic antenna increases, its beamwidth decreases. In general, the beamwidth of a parabolic reflector is defined at the point where the power drops to half of its maximum, i.e. the -3 dB point in the polar plot of the antenna pattern. The beamwidth can be calculated using

Beamwidth ψ = \frac{70 λ}{D}   (8.37)
When the feed element is placed at the focal point of the parabolic reflector, it illuminates the parabolic antenna. The antenna beamwidth defines the angular width of the beam of radio waves the antenna radiates; it is a characteristic of the antenna and is the same for transmitting and receiving. When designing a parabolic reflector, proper illumination of the antenna is required; in other words, the beamwidth of the feed should match the f/D ratio of the reflector. An under-illuminated reflector does not make use of its entire surface area for focusing the signal onto its feed element.
8.6.5 Gain Of The Antenna:
Assuming the parabolic reflector behaves as a circular aperture, the approximate maximum gain is obtained using

G ≈ \frac{9.87 D^{2}}{λ^{2}}   (8.38)
Where:

D = reflector diameter; λ = wavelength
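Equations (8.35), (8.37) and (8.38) can be wrapped into a small helper for sizing a dish. The Python sketch below is illustrative only; the 3 m diameter, 0.45 m depth and 1.4 GHz frequency are hypothetical values, not a design from this chapter.

import math

def dish_parameters(diameter_m, depth_m, freq_hz):
    """Focal length, f/D ratio, half-power beamwidth (deg) and approximate
    maximum gain of a parabolic reflector (Eqs. 8.35, 8.37, 8.38)."""
    wavelength = 3.0e8 / freq_hz
    focal = diameter_m ** 2 / (16 * depth_m)           # f = D^2 / 16d
    f_over_d = focal / diameter_m
    beamwidth_deg = 70 * wavelength / diameter_m       # psi = 70*lambda/D
    gain_lin = 9.87 * diameter_m ** 2 / wavelength ** 2
    gain_db = 10 * math.log10(gain_lin)
    return focal, f_over_d, beamwidth_deg, gain_db

# Illustrative values: 3 m dish, 0.45 m deep, observing at 1.4 GHz
print(dish_parameters(3.0, 0.45, 1.4e9))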

9. DATA ANALYSIS:
Analysis of data is a very important task, since data are the source of information: they provide validation of theories and models as well as guidance for their improvement, and data analysis can sometimes give birth to a new theory or model. In reality, however, any kind of time-series data, whether from an experiment, from a dynamical system, or from an economic, sociological or biological context, usually contains systematic or manual error. Analysis of such data in the presence of noise often leads to a wrong interpretation of the data. So we need to develop an initial platform, by denoising the data, from which we can start an extensive study. Filtering a time-series dataset is therefore always an indispensable task.
i. MOVING AVERAGE: 3-point or 5-point moving average method:
The greatest information loss occurs in the 3-point and 5-point moving average methods [2]. While concentrating on reducing the noise, it is agreed that we cannot compromise heavily on the trend and characteristics of the time-series data; but this is violated in the case of the 3-point and 5-point moving average methods. This clearly reveals that we are missing proper information at the i-th place, since we are averaging the information at the neighbouring places instead of giving proper weight to the information at the i-th place of the original time series.
The MA filter performs three important functions:
1) It takes L input points, computes the average of those L points and produces a single output point.
2) Due to the computation involved, the filter introduces a definite amount of delay.
3) The filter acts as a Low Pass Filter (with a poor frequency-domain response but a good time-domain response).

The difference equation for an L-point discrete-time moving average filter with input x[n] and averaged output y[n] is

y[n] = \frac{1}{L} \sum_{k=0}^{L-1} x[n-k]

For example, a 5-point Moving Average FIR filter takes the current and previous four samples of the input and calculates their average. This operation is represented in Figure 1, with the following difference equation for the input-output relationship in discrete time:

y[n] = 1/5 {x[n] + x[n−1] + x[n−2] + x[n−3] + x[n−4]}
     = 0.2 {x[n] + x[n−1] + x[n−2] + x[n−3] + x[n−4]}

Figure 1: Discrete-time 5-point Moving Average FIR filter
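A direct implementation of the L-point moving-average difference equation above takes only a few lines. The Python sketch below uses NumPy convolution; the noisy test signal is hypothetical.

import numpy as np

def moving_average(x, L=5):
    """L-point moving-average FIR filter: y[n] = (1/L) * sum_{k=0}^{L-1} x[n-k]."""
    h = np.ones(L) / L                                   # impulse response
    return np.convolve(x, h, mode="full")[:len(x)]       # causal output, same length as x

# Hypothetical noisy signal: slow ramp plus white noise
rng = np.random.default_rng(0)
x = np.linspace(0, 1, 200) + 0.1 * rng.standard_normal(200)
y = moving_average(x, L=5)
print(y[:10])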

ii. KALMAN FILTER: In 1960, R. E. Kalman published his famous paper describing a recursive solution to the discrete-data linear filtering problem. Since that time, due in large part to advances in digital computing, the Kalman filter has been the subject of extensive research and application, particularly in the area of autonomous or assisted navigation. The Kalman filter is a set of mathematical equations that provides an efficient computational (recursive) means to estimate the state of a process in a way that minimizes the mean of the squared error. The filter is very powerful in several respects: it supports estimation of past, present and even future states, and it can do so even when the precise nature of the modelled system is unknown. Conceptually, Kalman filtering of a time series {x_i}, i = 1, 2, …, n is analogous to updating a running average:

y_i = \frac{i-1}{i} x_{i-1} + \frac{1}{i} x_i,   i = 1, 2, …, n.

It should work far better than the 3-point or 5-point moving average method since, in estimating the i-th value, this approach uses only the (i−1)-th and i-th data of the original time series, taking their convex linear combination, rather than starting over each time with a simple average.
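For concreteness, a textbook one-dimensional Kalman filter for estimating a slowly varying level from noisy samples is sketched below. This is the standard scalar recursion, not the exact formulation quoted in the text, and the noise variances q and r as well as the test data are illustrative assumptions.

import numpy as np

def kalman_1d(z, q=1e-4, r=0.04, x0=0.0, p0=1.0):
    """Scalar Kalman filter with a random-walk state model.
    z: measurements; q: process-noise variance; r: measurement-noise variance."""
    x, p = x0, p0
    out = []
    for zi in z:
        p = p + q                 # predict: state assumed to follow a random walk
        k = p / (p + r)           # Kalman gain
        x = x + k * (zi - x)      # update with the new measurement
        p = (1 - k) * p
        out.append(x)
    return np.array(out)

# Hypothetical noisy measurements of a constant level of 10
rng = np.random.default_rng(1)
z = 10 + 0.2 * rng.standard_normal(100)
print(kalman_1d(z)[-5:])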
iii. EMD AND HT: Empirical Mode Decomposition is a method of decomposing a complex multicomponent signal x(t) into a series of L signals h_i(t), 1 ≤ i ≤ L, known as Intrinsic Mode Functions (IMFs), in decreasing order of frequency. Mathematically,

x(t) = \sum_{i=1}^{L} h_i(t) + d(t)

where d(t) is the residue. Each of the IMFs, say the i-th one h_i(t), is estimated by an iterative process, known as sifting, applied to the residual multicomponent signal.
The Intrinsic Mode Functions are defined by two conditions:
1. In the whole dataset, the number of local maxima and minima must equal the number of zero crossings, or differ from it by at most one.
2. At any point in time, the mean value of the "upper envelope" (defined by the local maxima) and the "lower envelope" (defined by the local minima) must be zero.
The sifting iteration is controlled by the standard deviation between successive sifting results,

SD = \sum_{t=0}^{T} \frac{|h_{k-1}(t) - h_k(t)|^{2}}{h_{k-1}^{2}(t)}

and the iteration for finding an IMF is stopped when 0.2 < SD < 0.3.
iv. WAVELET ANALYSIS: This section describes the method of wavelet analysis, includes a discussion of different wavelet functions, and gives details for the analysis of the wavelet power spectrum. Results in this section are adapted to discrete notation from the continuous formulas given in Daubechies, Weng and Lau, and Meyers et al. (1993). The noise-reduction algorithm consists of the following steps.
Step 1: In the first step, we differentiate the noisy signal x(t) to obtain the data xd(t), using the central finite difference method with fourth-order correction to minimize the error, i.e.

xd(t) = dx(t)/dt

Step 2: We then take the Discrete Wavelet Transform of the data xd(t) and obtain wavelet coefficients Wj,k at various dyadic scales j and displacements k. A dyadic scale is a scale whose numerical magnitude is equal to 2 raised to an integer exponent and is labelled by that exponent; thus the dyadic scale j refers to a scale of size 2^j, i.e. a resolution of 2^j data points. A low value of j therefore implies a finer resolution, while a high j analyzes the signal at a coarser resolution. This transform is the discrete analogue of the continuous Wavelet Transform [14, 19, 20].

Here j and k are integers. As the wavelet function ψ(t), we have chosen Daubechies' compactly supported orthogonal function with four filter coefficients [14].
Step 3: In this step we estimate the power P_j contained in the different dyadic scales j via

P_j = \sum_{k} W_{j,k}^{2}

By plotting the variation of P_j with j, we see that it is possible to identify a scale j_m at which the power due to noise falls off rapidly. This is important because, as we shall see from the case examples, it provides an automated means of detecting the threshold. Identification of the scale j_m at which the power due to noise shows its first minimum allows us to reset all W_{j,k} up to scale index j_m to zero, that is, W_{j,k} = 0 for j = 1, 2, …, j_m.
Step 4: In the fourth step, we reconstruct the denoised data x̂d(t) by taking the inverse transform of the coefficients W_{j,k}.
The set of values x̂d(t) so obtained gives a measure of the time variation in the signal. Upon differentiation, the contribution due to white noise moves towards the finer scales, because differentiation converts the uncorrelated stochastic process into a first-order moving average process and thereby distributes more energy to the finer scales. Finally, we plot x̂d(t) versus t to obtain the corresponding peaks.
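Steps 1 to 4 can be reproduced with the PyWavelets package. The sketch below is a simplified version of the procedure under stated assumptions: np.gradient stands in for the fourth-order central difference, the index jm of the finest scales to zero is supplied by hand rather than detected from the power spectrum, and the two-peak test signal is hypothetical.

import numpy as np
import pywt

def wavelet_denoise_derivative(x, wavelet="db2", jm=2):
    """Differentiate, zero the detail coefficients of the jm finest dyadic
    scales, and reconstruct, following the scheme described above."""
    xd = np.gradient(x)                              # numerical derivative of the signal
    coeffs = pywt.wavedec(xd, wavelet)               # [cA_J, cD_J, ..., cD_1]
    for j in range(1, jm + 1):                       # finest scales sit at the end of the list
        coeffs[-j] = np.zeros_like(coeffs[-j])
    return pywt.waverec(coeffs, wavelet)[:len(x)]    # denoised derivative x̂d(t)

# Hypothetical noisy signal with two sharp events
t = np.linspace(0, 1, 1024)
x = np.exp(-((t - 0.3) / 0.01) ** 2) + np.exp(-((t - 0.7) / 0.01) ** 2)
x += 0.05 * np.random.default_rng(2).standard_normal(t.size)
print(wavelet_denoise_derivative(x)[:5])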

v. SAVITZKY-GOLAY FILTER:

In their "seminal" [21] paper, Savitzky and Golay proposed a method of data smoothing based on local least-squares polynomial approximation. They showed that fitting a polynomial to a set of input samples and then evaluating the resulting polynomial at a single point within the approximation interval is equivalent to discrete convolution with a fixed impulse response. The basic idea behind least-squares polynomial smoothing is depicted in Figure 1, which shows a sequence of samples x(n) of a signal as solid dots. Considering for the moment the group of 2M+1 samples centred at n = 0, we obtain (by a process to be described) the coefficients of a polynomial

p(n) = \sum_{k=0}^{N} a_k n^{k}   (1)

that minimize the mean-squared approximation error for the group of input samples centred on n = 0,

ε_N = \sum_{n=-M}^{M} \left( p(n) - x[n] \right)^{2} = \sum_{n=-M}^{M} \left( \sum_{k=0}^{N} a_k n^{k} - x[n] \right)^{2}   (2)

The analysis is the same for any other group of 2M+1 input samples. We shall refer to M as the "half width" of the approximation interval. The smoothed output value is obtained by evaluating p(n) at the central point n = 0; that is, the output at n = 0 is

y[0] = p(0) = a_0

so the output value is just equal to the 0th polynomial coefficient.
Evaluating the fitted polynomial at points other than the central one leads to nonlinear-phase filters, which can be useful for smoothing at the ends of finite-length input sequences. The output at the next sample is obtained by shifting the analysis interval to the right by one sample, redefining the origin to be the position of the middle sample of the new block of 2M+1 samples, and repeating the polynomial fitting and evaluation at the central location. This can be repeated at each sample of the input, each time producing a new polynomial and a new value of the output sequence y[n]. Savitzky and Golay showed that at each position the smoothed output value obtained by sampling the fitted polynomial is identical to a fixed linear combination of the local set of input samples; i.e., the set of 2M+1 input samples within the approximation interval is effectively combined by a fixed set of weighting coefficients that can be computed once for a given polynomial order N and approximation interval of length 2M+1. That is, the output samples can be computed by a discrete convolution of the form

y[n] = \sum_{m=-M}^{M} h[m]\, x[n-m] = \sum_{m=n-M}^{n+M} h[n-m]\, x[m]

We have applied this model to the entire unfiltered data series of both the solar irradiance and the Forbush decrease in the present case, and have plotted the filtered data in Figures 1b and 2b. These smoothed data series are then used for a new nonparametric approach to estimate Granger causality directly.
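In practice the fixed convolution weights need not be derived by hand; SciPy's signal module provides the Savitzky-Golay filter directly. The following is a minimal sketch in which the window length 2M+1 = 11, the polynomial order N = 3 and the noisy test series are all illustrative choices rather than the settings used for the irradiance and Forbush-decrease data.

import numpy as np
from scipy.signal import savgol_filter

# Hypothetical noisy series standing in for an irradiance or Forbush-decrease record
rng = np.random.default_rng(3)
t = np.linspace(0, 10, 500)
x = np.sin(t) + 0.15 * rng.standard_normal(t.size)

# Local least-squares polynomial smoothing: 11-point window, cubic fit
y = savgol_filter(x, window_length=11, polyorder=3)
print(y[:5])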

vi. SIMPLE EXPONENTIAL SMOOTHING: The prescribed model for a time series {x_i}, i = 1, 2, …, n is

y_1 = x_1 and y_i = α x_i + (1-α) y_{i-1},   i = 2, 3, …, n

where y_i is the smoothed value at the i-th position and α (0 < α < 1) is a parameter. This is equivalent to

y_1 = x_1
y_i = α x_i + α(1-α) x_{i-1} + α(1-α)^{2} x_{i-2} + … + α(1-α)^{i-2} x_2 + (1-α)^{i-1} x_1,   for i = 2, 3, …, n

where the sum of the corresponding weights α, α(1-α), α(1-α)^{2}, …, α(1-α)^{i-2} and (1-α)^{i-1} is equal to unity. Thus, in effect, each smoothed value is a convex linear combination of all the previous observations as well as the current observation. We believe that denoising a time-series dataset can be performed with a satisfactory level of accuracy if we concentrate on the following two points:
i) Cases may arise in which the error generated at a certain position propagates to the next stages. In such cases, while developing a smoothing model we must account for this propagation of error and try to counteract it.
ii) While extracting the new time series by filtering the old one, we must keep in mind the positional importance of the data, i.e. if {y_i} is the newly developed time series obtained by filtering the old one {x_i}, i = 1, 2, …, n, then each y_i must be generated mostly from the corresponding x_i. In the case of the Kalman filter [3, 4] the positional importance is not maintained, since in the expression for the i-th filtered value more weight is given to x_{i-1} than to x_i. Keeping the above two points in mind, in this study we make an effort to apply the simple exponential smoothing technique, as it can more reliably reduce the noise mixed into the astronomical dataset. There remains, however, the question of the value of the coefficient. We made it clear in our earlier discussion [ ] that the value of α should be near 0.5 or 0.511, as it provides the best result (shown below) without hampering the positional importance of the data in the sense of filtration.
Thus we can certainly accept the Simple Exponential Smoothing and the Savitzky-Golay filter as favourable examples of our General Adaptive Rule of Filtering. In the absence of propagation of error, the percentage reduction of the total error pre
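The recursion y_1 = x_1, y_i = αx_i + (1−α)y_{i−1} translates directly into code. The following is a minimal sketch with α = 0.5, as suggested above; the noisy test series is hypothetical.

import numpy as np

def exponential_smoothing(x, alpha=0.5):
    """Simple exponential smoothing: y[0] = x[0]; y[i] = a*x[i] + (1-a)*y[i-1]."""
    y = np.empty_like(np.asarray(x, dtype=float))
    y[0] = x[0]
    for i in range(1, len(x)):
        y[i] = alpha * x[i] + (1 - alpha) * y[i - 1]
    return y

# Hypothetical noisy series
rng = np.random.default_rng(4)
x = np.cumsum(rng.standard_normal(100)) + 0.5 * rng.standard_normal(100)
print(exponential_smoothing(x)[:5])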

Memory flow in the Processes


(Persistency test for the Processes)

Finite Variance Scaling Analysis: A well known version of Finite Variance Scaling Method
(FVSM) is the Standard Deviation Analysis (SDA) [1, 2], which is based on the evaluation of the
standard deviation D(t) of the variable x(t).
D(t_j) = \left[ \frac{\sum_{i=1}^{j} x^{2}(t_i)}{j} - \left\{ \frac{\sum_{i=1}^{j} x(t_i)}{j} \right\}^{2} \right]^{1/2}   for j = 1, 2, …, n.

Eventually it is observed [1, 2, 3] that D(t) follows a power-law scaling with t. The Hurst
exponent occurs in several areas of applied mathematics, including fractals and chaos theory, long
memory processes and spectral analysis. Hurst exponent estimation has been applied in areas
ranging from astrophysics to computer networking. It applies to data sets that are statistically self-
similar. Statistically self-similar means that the statistical properties for the entire data set are the
same for sub-sections of the data set. Estimation of the Hurst exponent was originally developed
in hydrology. In fact, the estimation also provides for a time series data set provides a measure of
whether the data is a pure random walk or has underlying trends. That process with an underlying
trend defines it’s own randomness with some degree of autocorrelation. When the autocorrelation
has a very long decay, then the process is sometimes referred to as a long memory process
similarly the vice versa is true for Short memory process . It has been found that sometimes
processes which exhibits Hurst exponent values for long memory processes are purely random
The astronomical phenomena like the Total Solar Irradiance (TSI) variation and the Forbush decrease (FD) indices are very important for the understanding of the solar internal structure and the solar-terrestrial relationships. Hurst et al., in their pioneering work, introduced the notion of rescaled range analysis of a time series, which takes the scaling form D(t) ∝ t^H (H is now called the Hurst exponent). This stimulated Mandelbrot to introduce the concept of fractional Brownian motion (FBM) [5]. In a random walk context the value H = 0.5 indicates uncorrelated noise, 0 < H < 0.5 indicates anti-persistent noise, and 0.5 < H < 1 indicates persistent or long-range correlated noise [4].
Alternative scaling methods applied to a time series, where P(k) = C·k^(−α) and H = 1 − α/2, focus on the autocorrelation function, on the power spectrum representation, and on the evaluation of the variance of the diffusion generated by the series. All such scaling methods are related to the original Hurst's analysis and yield this Hurst exponent for sufficiently long time series.
sufficiently long time series. The value of the Hurst exponent ranges between 0 and 1. A value of
0.5 indicates a true random walk (a Brownian time series). In a random walk there is no correlation
between any element and a future element. A Hurst exponent value 0 < H < 0.5 corresponds to a time series with anti-persistent behaviour (negative autocorrelation). If the Hurst exponent lies in 0.5 < H < 1.0, the process is a long memory process; a value in this range indicates persistent behaviour (positive autocorrelation), and the time series is therefore expected to follow a trend. We do have a conventional run test in data mining to check whether a given time series is random or not, but FVSM is more useful in this context since it not only answers the question of randomness in the data but also provides a measure of the persistence or anti-persistence in it [8, 9, 10].
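A minimal sketch of this Standard Deviation Analysis in Python, assuming the data are supplied as a one-dimensional array sampled at times t_j; the function name, the choice of fit and the skipping of the first point (where D is zero) are illustrative assumptions.

import numpy as np

def hurst_sda(x, t=None):
    # Running standard deviation D(t_j) of x(t) up to t_j, then H from the
    # slope of log D(t_j) versus log t_j, following D(t) ∝ t^H.
    x = np.asarray(x, dtype=float)
    j = np.arange(1, len(x) + 1)
    t = j.astype(float) if t is None else np.asarray(t, dtype=float)
    mean_x = np.cumsum(x) / j
    mean_x2 = np.cumsum(x ** 2) / j
    D = np.sqrt(np.maximum(mean_x2 - mean_x ** 2, 0.0))
    valid = (j > 1) & (D > 0)
    H, _ = np.polyfit(np.log(t[valid]), np.log(D[valid]), 1)
    return H, D

# Illustrative use: H, D = hurst_sda(filtered_series)   (hypothetical variable name)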
Demonstration 2.2.a.i: Application of FVSM on TSI; Graphical representation; Conclusion
[Figure: log-log plot of D(t) versus t for the filtered TSI data]
Results: This plot is fitted with equation (4), yielding H = 0.1158. As the estimated value of H in the present case is less than 0.5, it can be suggested that the present data is anti-persistent in behaviour (i.e. negatively autocorrelated). Further fractal dimension analysis is needed for confirmation.
Demonstration 2.2.a.ii: Application of FVSM on FDi; Graphical representation; Results
[Figure: log-log plot of D(t) versus t for the filtered Forbush Decrease indices]
Result: The plot shows the log of the standard deviation against the log of time, calculated from the present filtered Forbush Decrease indices. This plot is fitted with equation (4), yielding H = 0.15. As the estimated value of H in the present case is less than 0.5, it can be suggested that the present data is anti-persistent in behaviour and that the process is a short memory process. Further confirmation is required.
FRACTAL GEOMETRY:
Fractality Analysis: Fractal methods have been studied to understand the irregular and chaotic nature of graphical structures. Since fractals were introduced in physics, their applications have promoted enormous progress in understanding phenomena that are most directly involved in the formation of irregular
structures. A broad class of clustering phenomena such as filtration, electrolysis and aggregation of
colloids and aerosols has received a good deal of attention. Other phenomena that are not strictly
clustering effects (e.g., dielectric breakdown, formation of a contact surface when two liquids are mixed
etc.) can be advantageously treated using fractals. Describing natural objects by geometry is as old as
science itself; traditionally this has involved the use of Euclidean lines, rectangles, cuboids, spheres and
so on. But, nature is not restricted to Euclidean shapes. More than twenty years ago Benoit B.
Mandelbrot observed that “clouds are not spheres, mountains are not cones, coastlines are not circles,
bark is not smooth, nor does lightning travel in a straight line”. Most of the natural objects we see around us are so complex in shape that they deserve to be called geometrically chaotic. They appear
impossible to describe mathematically and used to be regarded as the “monsters of mathematics”. In
1975, Mandelbrot introduced the concept of fractal geometry to characterize these monsters
quantitatively and to help us to appreciate their underlying regularity. The simplest way to construct a
fractal is to repeat a given operation over and over again deterministically. The classical Cantor set is a
simple textbook example of such a fractal. It is created by dividing a line into n equal pieces, removing (n − m) of the pieces created, and repeating the process with the m remaining pieces ad infinitum; such a construction has fractal dimension D = log m / log n (for the classical Cantor set, n = 3 and m = 2, so D ≈ 0.63). However, fractals that occur in nature arise through continuous kinetic or random processes. Having
realized this simple law of nature, we can imagine selecting a line randomly at a given rate, and
dividing it randomly, for example. We can further tune the model to determine how random this
randomness is. Starting with an infinitely long line we obtain an infinite number of points whose
separations are determined by the initial line and the degree of randomness with which intervals were
selected. The properties of these points appear to be statistically self-similar and characterized by the
fractal dimension, which is found to increase with increasing degree of order and reaches its maximum value in the perfectly ordered pattern. It is now accepted that when the power spectrum of an irregular time series is expressed by a single power law, P(f) ∝ f^(−β), the time series shows the property of a fractal curve. As the fractal length L(k) of the time series is expressed as L(k) ∝ k^(−D), where k is the time interval, the fractal dimension D is expected to be closely related to the power-law index β. The relation between β and D has been investigated by Higuchi and is given by D = (5 − β)/2. We can determine the randomness of a time series by determining its fractal dimension, and from this we can conclude whether a physical structure is chaotic in nature or not. For any physical structure D lies between 1 and 2. For the ideal case of a chaotic physical structure D is 5/3, which describes inertial-range turbulence in an incompressible fluid. If the value of D lies between 1 and 5/3 the time series is expected to follow some trend; if D lies between 5/3 and 2 it usually does not obey any trend.
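A minimal sketch of a Higuchi-style estimate of the fractal dimension in Python, assuming the series is a one-dimensional array; the function name and the default kmax are illustrative choices, and D is read off as minus the slope of log L(k) against log k, following L(k) ∝ k^(−D).

import numpy as np

def higuchi_fractal_dimension(x, kmax=16):
    x = np.asarray(x, dtype=float)
    n = len(x)
    ks, lengths = [], []
    for k in range(1, kmax + 1):
        lm = []
        for m in range(k):                   # initial offsets m = 0 .. k-1
            idx = np.arange(m, n, k)         # subsample x[m], x[m+k], ...
            if len(idx) < 2:
                continue
            # Normalised length of the subsampled curve for this offset.
            norm = (n - 1) / ((len(idx) - 1) * k)
            lm.append(np.sum(np.abs(np.diff(x[idx]))) * norm / k)
        ks.append(k)
        lengths.append(np.mean(lm))
    # L(k) ∝ k^(-D), so D is minus the slope of the log-log fit.
    slope, _ = np.polyfit(np.log(ks), np.log(lengths), 1)
    return -slope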