Instrumentation For Space Technology FINAL
In recent years, the IAU formalized several names for stars amid calls from the astronomical community to
include the public in their naming process. The IAU formalized 14 star names in the 2015 "Name ExoWorlds"
contest, taking suggestions from science and astronomy clubs around the world. Then in 2016, the IAU
approved 227 star names, mostly taking cues from antiquity in making its decision. The goal was to reduce
variations in star names and in spelling ("Fomalhaut", for example, had 30 recorded variations). However, the
long-standing name "Alpha Centauri" – referring to a famous star
A star develops from a giant, slowly rotating cloud that is made up entirely or almost entirely of hydrogen and
helium. Under its own gravitational pull, the cloud begins to collapse inward, and as it shrinks, it spins more
and more quickly, with the outer parts becoming a disk while the innermost parts become a roughly spherical
clump.
According to NASA, this collapsing material grows hotter and denser, forming a ball-shaped protostar. When
the heat and pressure in the protostar reach about 1.8 million degrees Fahrenheit (1 million degrees Celsius),
atomic nuclei that normally repel each other start fusing together, and the star ignites. Nuclear fusion converts a
small amount of the mass of these atoms into extraordinary amounts of energy — for instance, 1 gram of mass
converted entirely to energy would be equal to an explosion of roughly 22,000 tons of TNT.
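As a quick check on that figure, a few lines of Python reproduce the TNT equivalent of converting one gram of mass entirely to energy. The 4.184×10⁹ J-per-ton value is the standard TNT convention; the small spread around 22,000 tons comes only from rounding.

```python
# Mass-energy equivalence E = m*c^2, expressed as tons of TNT.
C = 2.998e8              # speed of light, m/s
J_PER_TON_TNT = 4.184e9  # conventional energy of one ton of TNT, joules

def mass_to_tnt_tons(mass_kg: float) -> float:
    """TNT-equivalent (tons) of converting mass_kg entirely to energy."""
    energy_j = mass_kg * C**2
    return energy_j / J_PER_TON_TNT

tons = mass_to_tnt_tons(0.001)  # one gram -> roughly 21,500 tons of TNT
```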
2. STELLAR EVOLUTION
The life cycles of stars follow patterns based mostly on their initial mass. These include intermediate-mass stars
such as the sun, with half to eight times the mass of the sun, high-mass stars that are more than eight solar
masses, and low-mass stars a tenth to half a solar mass in size. The greater a star's mass, the shorter its lifespan
generally is, according to NASA. Objects smaller than a tenth of a solar mass do not have enough gravitational
pull to ignite nuclear fusion — some might become failed stars known as brown dwarfs.
An intermediate-mass star begins with a cloud that takes about
100,000 years to collapse into a protostar with a surface
temperature of about 6,750 F (3,725 C). After hydrogen fusion
starts, the result is a T Tauri star, a variable star that fluctuates in
brightness. This star continues to collapse for roughly 10 million
years until its expansion due to energy generated by nuclear fusion
is balanced by its contraction from gravity, after which point it
becomes a main-sequence star that gets all its energy from
hydrogen fusion in its core.
The greater the mass of such a star, the more quickly it will use its
hydrogen fuel and the shorter it stays on the main sequence. After all the hydrogen in the core is fused into
helium, the star changes rapidly — with no radiation pressure from fusion to resist it, gravity immediately crushes matter
down into the star's core, quickly heating the star. This causes the star's outer layers to expand enormously and
to cool and glow red as they do so, rendering the star a red giant. Helium starts fusing together in the core, and
once the helium is gone, the core contracts and becomes hotter, once more expanding the star but making it
bluer and brighter than before, blowing away its outermost layers.
After the expanding shells of gas fade, the remaining core is left,
a white dwarf that consists mostly of carbon and oxygen with an
initial temperature of roughly 180,000 degrees F (100,000
degrees C). Since white dwarfs have no fuel left for fusion, they
grow cooler and cooler over billions of years to become black
dwarfs too faint to detect. Our sun should leave the main
sequence in about 5 billion years, according to Live Science.
A high-mass star forms and dies quickly. These stars form from protostars in just 10,000 to 100,000 years.
While on the main sequence, they are hot and blue,
some 1,000 to 1 million times as luminous as the sun
and are roughly 10 times wider. When they leave the
main sequence, they become a bright red supergiant,
and eventually become hot enough to fuse carbon into
heavier elements. After some
10,000 years of such fusion,
the result is an iron core
roughly 3,800 miles wide
(6,000 km), and since any
more fusion would consume
energy instead of liberating
it, the star is doomed, as its
nuclear radiation can no
longer resist the force of
gravity. When a star reaches a
mass of more than 1.4 solar
masses, electron pressure
cannot support the core
against further collapse, according to NASA. The result is a supernova. Gravity causes the core to collapse,
making the core temperature rise to nearly 18 billion degrees F (10 billion degrees C), breaking the iron down
into neutrons and neutrinos. In about one second, the core shrinks to about six miles (10 km) wide and rebounds
just like a rubber ball that has been squeezed, sending a shock wave through the star that causes fusion to occur
in the outlying layers. The star then explodes in a so-called Type II supernova. If the remaining stellar core is
less than roughly three solar masses, it becomes a neutron star made up nearly entirely of neutrons, and
rotating neutron stars that beam out detectable radio
pulses are known as pulsars. If the stellar core was
larger than about three solar masses, no known force
can support it against its own gravitational pull, and it
collapses to form a black hole.
A low-mass star uses hydrogen fuel so sluggishly that it can
shine as a main-sequence star for 100 billion to
1 trillion years — since the universe is only about 13.7
billion years old, according to NASA, this means no
low-mass star has ever died. Still, astronomers
calculate these stars, known as red dwarfs, will never
fuse anything but hydrogen, which means they will
never become red giants. Instead, they should
eventually just cool to become white dwarfs and then
black dwarfs.
Although our solar system only has one star, most stars like our sun are not solitary but belong to binary or
multiple systems, in which two or more stars orbit each other. In fact, just one-third of stars like our sun are single, while two-thirds are
multiples — for instance, the closest neighbor to our solar system, Proxima Centauri, is part of a multiple
system that also includes Alpha Centauri A and Alpha Centauri B. Still, class G stars like our sun only make up
some 7 percent of all stars we see — when it comes to systems in general, about 30 percent in our galaxy are
multiple, while the rest are single, according to Charles J. Lada of the Harvard-Smithsonian Center for
Astrophysics.
Binary stars develop when two protostars form near each other. One member of this pair can influence its
companion if they are close enough together, stripping away matter in a process called mass transfer. If one of
the members is a giant star that leaves behind a neutron star or a black hole, an X-ray binary can form, where
matter pulled from the stellar remnant's companion can get extremely hot — more than 1 million F (555,500 C) —
and emit X-rays. If a binary includes a white dwarf, gas pulled from a companion onto the white dwarf's surface
can fuse violently in a flash called a nova. At times, enough gas builds up for the dwarf to collapse, leading its
carbon to fuse nearly instantly and the dwarf to explode in a Type I supernova, which can outshine a galaxy for
a few months.
Proxima Centauri has about an eighth of the mass of the sun. Faint red Proxima Centauri - at only 3,100 K
(5,120 F) and 500 times less bright than our sun - is nearly a fifth of a light-year from Alpha Centauri A and
B. That raises some question about whether it is gravitationally bound to Alpha Centauri A and B.
KEY CHARACTERISTICS
Brightness
Astronomers describe star brightness in terms of magnitude and luminosity. The magnitude of a star is based on
a scale more than 2,000 years old, devised by Greek astronomer Hipparchus around 125 BC, according to
NASA. He numbered groups of stars based on their brightness as seen from Earth — the brightest stars were
called first magnitude stars, the next brightest were second magnitude, and so on up to sixth magnitude, the
faintest visible ones. Nowadays astronomers refer to a star's brightness as viewed from Earth as its apparent
magnitude, but since the distance between Earth and the star can affect the light one sees from it, they now also
describe the actual brightness of a star using the term absolute magnitude, which is defined by what its apparent
magnitude would be if it were 10 parsecs or 32.6 light years from Earth. The magnitude scale now runs to more
than six and less than one, even descending into negative numbers — the brightest star in the night sky is Sirius,
with an apparent magnitude of -1.46.
Luminosity is the power of a star — the rate at which it emits energy. Although power is generally measured in
watts — for instance, the sun's luminosity is 400 trillion trillion watts— the luminosity of a star is usually
measured in terms of the luminosity of the sun. For example, Alpha Centauri A is about 1.3 times as luminous
as the sun. To figure out luminosity from absolute magnitude, one must calculate that a difference of five on the
absolute magnitude scale is equivalent to a
factor of 100 on the luminosity scale — for
instance, a star with an absolute magnitude of 1
is 100 times as luminous as a star with an
absolute magnitude of 6. The brightness of a
star depends on its surface temperature and
size.
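The factor-of-100-per-five-magnitudes rule above (Pogson's rule) can be sketched as a one-line function; each magnitude step corresponds to 100^(1/5) ≈ 2.512 in brightness.

```python
# Pogson's rule: 5 magnitudes = a factor of 100 in brightness,
# and lower magnitude numbers mean brighter objects.
def brightness_ratio(mag_a: float, mag_b: float) -> float:
    """How many times brighter object A is than object B."""
    return 100 ** ((mag_b - mag_a) / 5)

ratio = brightness_ratio(1.0, 6.0)  # absolute magnitude 1 vs 6 -> 100x
```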
Colour
Stars come in a range of colours, from reddish
to yellowish to blue. The colour of a star
depends on surface temperature. A star might
appear to have a single colour, but actually
emits a broad spectrum of colors, potentially
including everything from radio waves and
infrared rays to ultraviolet beams and gamma
rays. Different elements or compounds absorb and emit different colours or wavelengths of light, and by
studying a star's spectrum, one can divine what its composition might be.
Surface temperature
Astronomers measure star temperatures in a unit known as the kelvin, with a temperature of zero K ("absolute
zero") equaling minus 273.15 degrees C, or minus 459.67 degrees F. A dark red star has a surface temperature
of about 2,500 K (2,225 C and 4,040 F); a bright red star, about 3,500 K (3,225 C and 5,840 F); the sun and
other yellow stars, about 5,500 K (5,225 C and 9,440 F); a blue star, about 10,000 K (9,725 C and 17,540 F) to
50,000 K (49,725 C and 89,540 F). The surface temperature of a star depends in part on its mass and affects its
brightness and color. Specifically, the luminosity of a star is proportional to temperature to the fourth power. For
instance, if two stars are the same size but one is twice as hot as the other in kelvin, the former would be 16
times as luminous as the latter.
Size
Astronomers generally measure the size of stars
in terms of the radius of our sun. For instance,
Alpha Centauri A has a radius of 1.05 solar radii
(the plural of radius). Stars range in size from
neutron stars, which can be only 12 miles (20
kilometres) wide, to supergiants roughly 1,000
times the diameter of the sun. The size of a star
affects its brightness. Specifically, luminosity is
proportional to the radius squared. For instance,
if two stars had the same temperature, if one star
was twice as wide as the other one, the former
would be four times as bright as the latter.
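The two scaling rules above (luminosity rising as temperature to the fourth power and as radius squared) combine into the black-body relation L = 4πR²σT⁴, which in solar units becomes a one-line function. The 5772 K solar temperature below is the IAU nominal value, used here only for illustration.

```python
# Relative luminosity from the black-body scaling L/Lsun = (R/Rsun)^2 * (T/Tsun)^4.
T_SUN = 5772.0  # K, IAU nominal solar effective temperature

def relative_luminosity(r_solar: float, t_kelvin: float) -> float:
    """Luminosity in solar units for radius r_solar (solar radii) and temperature t_kelvin."""
    return r_solar ** 2 * (t_kelvin / T_SUN) ** 4

# Same size as the sun but twice as hot -> 16x; twice as wide at the same T -> 4x.
hot = relative_luminosity(1.0, 2 * T_SUN)
wide = relative_luminosity(2.0, T_SUN)
```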
Mass
Astronomers represent the mass of a star in
terms of the solar mass, the mass of our sun. For
instance, Alpha Centauri A is 1.08 solar masses.
Stars with similar masses might not be similar
in size because they have different densities. For
instance, Sirius B is roughly the same mass as the sun but is 90,000 times as dense, and so is only a fiftieth its
diameter. The mass of a star affects surface temperature.
Magnetic field
Stars are spinning balls of roiling, electrically charged gas, and thus typically generate magnetic fields. When it
comes to the sun, researchers have discovered its magnetic field can become highly concentrated in small areas,
creating features ranging from sunspots to spectacular eruptions known as flares and coronal mass ejections. A
survey at the Harvard-Smithsonian Center for Astrophysics found that the average stellar magnetic field
increases with the star's rate of rotation and decreases as the star ages.
Metallicity
The metallicity of a star measures the amount of "metals" it has — that is, any element heavier than helium.
Three generations of stars may exist based on metallicity. Astronomers have not yet discovered any of what
should be the oldest generation, Population III stars born in a universe without "metals." When these stars died,
they released heavy elements into the cosmos, which Population II stars incorporated relatively small amounts
of. When a number of these died, they released more heavy elements, and the youngest Population I stars like
our sun contain the largest amounts of heavy elements.
The structure of a star can often be thought of as a series of thin nested shells, somewhat like an onion. A star
during most of its life is a main-sequence star, which consists of a core, radiative and convective zones, a
photosphere, a chromosphere and a corona. The core is where all the nuclear fusion takes places to power a star.
In the radiative zone, energy from these reactions is transported outward by radiation, like heat from a light
bulb, while in the convective zone, energy is transported by the roiling hot gases, like hot air from a hairdryer.
Massive stars that are more than several times the mass of the sun are convective in their cores and radiative in
their outer layers, while stars comparable to the sun or less in mass are radiative in their cores and convective in
their outer layers. Intermediate-mass stars of spectral type A may be radiative throughout.
To explain this condition, temperature, mass, density and composition must be considered together, and hence
they are combined into an equation of state. This can be started from the ideal gas law

P = ρRT/μ,

where T is the absolute temperature, R is the gas constant and μ is the mean molecular weight.
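The gas-law equation of state can be evaluated per particle as P = ρk_BT/(μm_H). The solar-core values below (ρ ≈ 150 g/cm³, T ≈ 1.5×10⁷ K, μ ≈ 0.62) are illustrative textbook numbers, not outputs of a solar model; they land within a factor of order unity of the model central pressure.

```python
# Ideal-gas pressure P = rho * k_B * T / (mu * m_H).
K_B = 1.380649e-23  # Boltzmann constant, J/K
M_H = 1.6735e-27    # mass of a hydrogen atom, kg

def gas_pressure(rho_kg_m3: float, t_kelvin: float, mu: float) -> float:
    """Ideal-gas pressure in pascals for density rho, temperature T, mean molecular weight mu."""
    return rho_kg_m3 * K_B * t_kelvin / (mu * M_H)

# Rough solar-core conditions (illustrative values, not model output):
p_core = gas_pressure(1.5e5, 1.5e7, 0.62)  # of order 10^16 Pa
```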
The second assumption concerns the methods of energy transfer in a star like the Sun. Energy transfer
from the sun was considered to occur by radiation, convection, conduction and neutrino loss, with the major part
carried by radiation and convection. Radiative transport is expressed as a temperature gradient:

dT/dr = −0.1875 kρL / (πac r²T³)

where k is the opacity of the stellar model — a function of the composition of the gases and of their physical
conditions, requiring detailed knowledge of all the processes important for the radiative flux (elastic and inelastic
scattering, absorption and emission, inverse bremsstrahlung) and in turn a detailed treatment of the atomic
levels in the solar interior — and L is the luminosity, or electromagnetic energy flux. The above equation
represents the radiative mode of heat transfer, whereas heat transfer by convection follows

dT/dr = (1 − 1/γ)(T/P)(dP/dr).
A point source S radiates light equally in all directions; the amount passing through an area A varies with the
distance of the surface from the light according to the inverse-square law. The Stefan–Boltzmann equation applied
to a black body — an idealized object which is perfectly opaque and non-reflecting — gives the luminosity of
such a body, and the flux at distance r from a source of luminosity L is

F = L/(4πr²)
For main-sequence stars, luminosity scales with mass approximately as

L/L⊙ = (M/M⊙)^3.5

If we define M as the mass of the star in terms of solar masses (and L in solar luminosities), the above
relationship can be simplified as L = M^3.5.
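The simplified mass–luminosity relation translates directly into code. The Sirius A mass used below (~2.06 solar masses) is an illustrative value; note that the single power law underestimates its observed luminosity (~25 L⊙), which shows the limits of this approximation away from one solar mass.

```python
# Simplified main-sequence mass-luminosity relation L = M**3.5 (solar units).
def luminosity_from_mass(m_solar: float) -> float:
    """Approximate main-sequence luminosity (solar units) from mass (solar masses)."""
    return m_solar ** 3.5

l_sirius = luminosity_from_mass(2.06)  # ~12-13 Lsun from this crude power law
```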
Luminosity is an intrinsic measurable property of a
star independent of distance. The concept of
magnitude, on the other hand, incorporates
distance. The apparent magnitude is a measure of
the diminishing flux of light as a result of distance
according to the inverse-square law. The Pogson
logarithmic scale is used to measure both apparent
and absolute magnitudes, the latter corresponding to
the brightness of a star or other celestial body as
seen if it would be located at an interstellar
distance of 10 parsecs (3.1×10^17 metres). In addition to this brightness decrease from
increased distance, there is an extra decrease of brightness due to extinction from intervening
interstellar dust.
By measuring the width of certain absorption lines in the stellar spectrum, it is often possible
to assign a certain luminosity class to a star without knowing its distance. Thus a fair measure
of its absolute magnitude can be determined without knowing either its distance or the
interstellar extinction.

In the magnitude handout, we distinguished between two different magnitudes: the apparent magnitude,
which indicates how bright an object appears to be, and absolute magnitude, which indicates a star's true
brightness, or luminosity. The only reason those two numbers differ from star to star is that
every star is not the same distance from us. We can take advantage of this by using the difference between
a star's apparent magnitude and absolute magnitude to actually calculate the distance of the star. This
difference is called the distance modulus, m – M.

Recall that apparent magnitude is a measure of how bright a star appears from Earth, at its "true
distance," which we call D. Absolute magnitude is the magnitude the star would have if it were at a
standard distance of 10 parsecs away. So this presents us with three general possibilities for the value of
the distance modulus:

If the star is exactly 10 parsecs away (rare, but it does happen), the absolute
magnitude will be the same as the apparent magnitude. The apparent magnitude is then a
good indicator of true luminosity. Thus, if m – M = 0, then the distance D = 10 pc.

If the star is closer than 10 parsecs, then the star will appear deceptively bright; its
apparent magnitude will be too bright to tell us its true luminosity. The star looks brighter
than it actually is. Remember that the magnitude system is "backwards," in that lower
numbers mean brighter stars. Therefore, in the case where the star is closer than 10 parsecs,
the apparent magnitude will be a lower number (brighter) than the absolute magnitude, and m
– M will be a negative number. So if m – M < 0, then the distance D < 10 pc.

If the star is farther than 10 parsecs, then the star will appear deceptively dim; its
apparent magnitude will be a higher number (dimmer) than the absolute magnitude, and m – M
will be a positive number. So if m – M > 0, then the distance D > 10 pc.
In measuring star brightness, absolute magnitude, apparent magnitude, and distance are
interrelated parameters — if two are known, the third can be determined. Since the Sun's
luminosity is the standard, comparing these parameters with the Sun's apparent magnitude
and distance is the easiest way to remember how to convert between them, although
officially, zero-point values are defined by the IAU.
The magnitude of a star, a unitless measure, is a logarithmic scale of observed visible
brightness. The apparent magnitude is the observed visible brightness from Earth which
depends on the distance of the object. The absolute magnitude is the apparent magnitude at a
distance of 10 pc (3.1×10^17 m); the bolometric absolute magnitude is therefore a logarithmic
measure of the bolometric luminosity.
The difference in bolometric magnitude between two objects is related to their luminosity
ratio according to:[19]
M_bol,1 − M_bol,2 = −2.5 log10(L1/L2)

where:
● M_bol,1 is the bolometric magnitude of the first object
● M_bol,2 is the bolometric magnitude of the second object
● L1 is the first object's bolometric luminosity
● L2 is the second object's bolometric luminosity
The zero point of the absolute magnitude scale is actually defined as a fixed luminosity
of 3.0128×10^28 W. Therefore, the absolute magnitude can be calculated from the luminosity in
watts:

M_bol = −2.5 log10(L / 3.0128×10^28 W)

and the luminosity in watts can be calculated from an absolute magnitude (although absolute
magnitudes are often not measured relative to an absolute flux):

L = 3.0128×10^28 W × 10^(−0.4 M_bol)

A measure of the luminosity created in a star by these reactions is given by

dL/dr = 4πr²ρ(ε + T dS/dt)

where ε is the rate of energy production in erg g⁻¹ s⁻¹.
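The IAU zero point quoted above (3.0128×10²⁸ W) makes the conversion between bolometric absolute magnitude and luminosity a one-liner in each direction; the solar luminosity value used below (3.828×10²⁶ W) is the IAU nominal value, taken here as an illustrative input.

```python
import math

# IAU 2015 zero point of the absolute bolometric magnitude scale.
L0 = 3.0128e28  # watts

def mbol_from_luminosity(l_watts: float) -> float:
    """Bolometric absolute magnitude from luminosity in watts."""
    return -2.5 * math.log10(l_watts / L0)

def luminosity_from_mbol(m_bol: float) -> float:
    """Luminosity in watts from bolometric absolute magnitude."""
    return L0 * 10 ** (-0.4 * m_bol)

m_sun = mbol_from_luminosity(3.828e26)  # the sun comes out near M_bol = +4.74
```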
Absolute Magnitude: the apparent magnitude that a star would have if it were, in our imagination, placed at a
distance of 10 parsecs or 32.6 light years from the Earth.
From the definitions of absolute magnitude M and apparent magnitude m, and some algebra, m and M are related
by the logarithmic equation

m − M = 5 log10(d) − 5, with d in parsecs,

which permits us to calculate the absolute magnitude from the apparent magnitude and the distance. This equation
can be rewritten as

d(pc) = 10^((m − M + 5) / 5)

The quantity m − M is called the distance modulus, since it is a measure of how distant the star is.
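The distance-modulus formula can be checked in code. The Sirius magnitudes used below (m = −1.46, M ≈ +1.42) are standard catalogue values used only as an example; they return roughly 2.6 pc, matching its known distance.

```python
# Distance from the distance modulus: d(pc) = 10**((m - M + 5) / 5).
def distance_pc(apparent_m: float, absolute_m: float) -> float:
    """Distance in parsecs from apparent magnitude m and absolute magnitude M."""
    return 10 ** ((apparent_m - absolute_m + 5) / 5)

d_sirius = distance_pc(-1.46, 1.42)  # roughly 2.6 pc
```

Note the three regimes described above fall out directly: m − M = 0 gives exactly 10 pc, negative moduli give closer stars, positive moduli give more distant ones.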
Here S is the entropy per unit mass. The T dS/dt term (the "entropy term") is the sole term in the basic
stellar evolution equations that includes time explicitly.
The final assumption of the standard solar model is that the sun was initially of
homogeneous, primordial composition, and highly convective at its main sequence turn on.
Since heavy elements are neither created nor destroyed in the thermonuclear reactions in a
solar-type star, they provide a record of the initial abundances, and only the relative amounts
of hydrogen and helium-4 are an indicator of stellar evolution.
For the understanding of physical and chemical structures, the theory of stellar
structure and evolution states that the condition of hydrostatic equilibrium, together with
the conservation of energy and the mechanism of energy transport, determines the physical
structure of a star like the Sun. The above four assumptions or inputs can be summarized as
● The equation of state for stellar matter
● The radiative opacity (k) as a function of density (ρ) , temperature (T), and chemical
composition
● The energy production per unit mass and time, again as a function of ρ, T, and
chemical changes
● Initial chemical composition
From one of the assumptions it should be pointed out that the radiative opacity k is
directly connected with the photon mean free path, λ = 1/(kρ). Throughout the internal
radiative region, k governs the temperature gradient through the well-known relations. With
regard to the fourth assumption, denote the mass abundances of H, He and heavier
elements by X, Y and Z respectively (where X + Y + Z = 1). The present ratio (Z/X) of
heavy ("metallic") elements to hydrogen in the atmosphere is (Z/X)photo = 0.0245 (1±0.061), which was derived from meteoritic analysis
and solar spectroscopy.
If complete information about the initial photospheric composition were available, and
the theory of stellar models were capable of predicting the solar radius firmly, then there would be
no free parameter. The model should account for the solar luminosity and radius
at the solar edge without tuning any parameters: the standard solar model must reproduce the
Sun's luminosity and radius at the Sun's age. The accuracy of the mass determination depends
directly on the determination of G. The mass of the Sun is 1.9891×10^33 g with a relative
uncertainty of ±0.02%. The radius of the
Sun for stellar structure calculations is
defined at an optical depth of tau = 2/3, and
is known, through transit and eclipse
measurements to be 6.96×10^10 cm at tau =
0.001 with an error of ±0.01%. Here, as
noted by Ulrich and Rhodes, it is important
to translate the glancing angle measurements
at the Sun's limb to determine the Sun's
radius to a tau = 2/3 optical depth measured perpendicular to the Sun's surface. The
luminosity is determined from solar constant measurements from space. This luminosity of
the sun (Lo) depends in a rather sensitive way on the initial helium abundance Y and the
metal abundance Z. Since the ratio (Z/X) is constrained by observational data, Y and Z cannot
be chosen independently: if Y increases, Z must decrease. ERB-Nimbus 7 measures a solar
irradiance of 1371.0±0.765 W/m², and SMM/ACRIM provides an independent measurement; the
luminosity of the Sun is 3.846×10^33 erg/s from SMM/ACRIM and 3.857×10^33 erg/s from
ERB-Nimbus 7. Taking the average of the two and setting the error equal to the difference
between them, one obtains Lsun = (3.8515±0.011)×10^33 erg/s. It is also well known that the
mass of the Sun is M0 = (1.98892±0.00025)×10^33 g, the radius R0 = (6.9598±0.0007)×10^10 cm,
and the luminosity produced by the Sun is L0 = 4.844(1±0.001)×10^33 erg s⁻¹.
The age of the Sun is inferred from the ages of the oldest meteorites, although the
age is commonly quoted as 4.6 Gyr or 4.7 Gyr. Guenther notes that the latest
determinations of the ages of the oldest meteorites set the age of meteoritic condensation at
4.56 Gyr, which revises his earlier estimate; the new best estimate of the Sun's age is now
4.52±0.04 Gyr. So, in order to produce a standard solar model, one must follow the evolution of a
homogeneous solar mass up to the solar age. To reproduce the measured radius, the
efficiency of convection must be calibrated, since convection dominates the energy transport in the outer
layers of the sun.
Mixing length theory defines the mixing length l as the
distance over which a moving element of gas can be identified before it mixes appreciably
with its surroundings. This length is related to the pressure scale height Hp = 1/(d ln P/dr)
through l = α Hp, where α is independent of the radial coordinate and is used as a free
parameter. By varying α, the mixing length can be adjusted; thus the solar radius is determined
by the efficiency of convection. If α is increased, convection becomes more efficient, the
temperature gradient smaller and the surface temperature higher. The above description
establishes that a standard model has three essential parameters: α, Y and (Z/X).
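For a sense of scale, the pressure scale height can be evaluated as Hp = P/(ρg) with rough photospheric numbers. The pressure, density and surface gravity below are illustrative textbook-order values, not solar-model output, and α = 1.8 is merely a typical calibrated value.

```python
# Pressure scale height Hp = P / (rho * g) and the mixing length l = alpha * Hp.
def pressure_scale_height(p_pa: float, rho_kg_m3: float, g_ms2: float) -> float:
    """Distance over which pressure drops by a factor e, in metres."""
    return p_pa / (rho_kg_m3 * g_ms2)

# Rough solar photosphere values (illustrative): P ~ 1.2e4 Pa, rho ~ 2e-4 kg/m^3, g = 274 m/s^2.
hp_photosphere = pressure_scale_height(1.2e4, 2e-4, 274.0)  # a couple hundred km
mixing_length = 1.8 * hp_photosphere  # alpha ~ 1.8, a typical calibrated value
```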
Many studies have been dedicated to finding the volatility trends of the condensing
elements, which are expressed by the condensation temperature of an element and its
compounds. More recent studies have become quite detailed in investigating the
condensation of major rock-forming elements under various potential nebular conditions,
such as different dust-to-gas ratios.
The abundances of most of the elements can be directly determined from the photospheric
spectrum of the Sun. It is assumed that these abundances are identical to the abundance of the
elements in the zero age models. Neon and argon abundances are adopted from their
measured abundances in the solar corona, solar winds and nebula (Meyer 1979). Helium, as
the second most abundant element in the Sun, is left as a free parameter of the standard solar
model. The abundance of helium is adjusted to produce a solar model with the Sun's
luminosity. Recent results from standard solar models that include helium and heavy-element
settling imply protosolar abundances in which, considering settling effects, one
can derive protosolar mass fractions of X0 = 0.7110, Y0 = 0.2741 and Z0 = 0.0149.
New values for the C, N, O, Ne and Ar abundances have been calculated using three-dimensional
rather than one-dimensional atmospheric models, including hydrodynamical
effects and uncertainties in atomic data and observational spectra. The new abundance
estimates, together with the previous best estimates for other solar surface abundances,
give a ratio of heavy elements to hydrogen by mass of Z/X = 0.0176, much less than the
previous value of Z/X = 0.0229.

The most problematic and important source of uncertainty is the surface composition
of the Sun. Systematic errors dominate: the effects of line blending, departures from local
thermodynamic equilibrium and details of the model of the solar atmosphere. But it was
assumed in the calculation that the uncertainty in all-important element abundances is
approximately identical.
The significance of a disagreement with the standard solar model is of great
importance. The unsolved solar neutrino problem gave birth to a series of ad hoc
"nonstandard" solar models, in which the solar model was changed with profound care
in order to lower the calculated 8B neutrino flux. Over the past two
decades, the most often hypothesized change is some form of mixing of the solar material
that reduces the central temperature and therefore the important 8B neutrino flux. Previous
arguments that extensive mixing does not occur are theoretical, including the fact that the
required energy is 5 orders of magnitude larger than the total present rotational energy. Thus,
these scientists were able to establish a new nonstandard solar model, adjusting the free
parameter to reduce the calculated 7Be flux more than the 8B flux. The calculated neutrino
fluxes depend on the central temperature of the solar model approximately as a power
of the temperature (varying from about 1.1 for the p-p neutrinos to 24 for the 8B neutrinos).
Similar temperature scaling is found for nonstandard solar models. Within three
years, 2001 to 2003, scientists solved a mystery with which they had been struggling for four
decades; the solution turned out to be important both for physics and for astronomy. During
the first half of the twentieth century, physicists came to believe that the conversion of hydrogen to
helium is the source of the solar luminosity.
This conversion process can be written schematically as

4p → ⁴He + 2e⁺ + 2ν_e
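The energy released by this reaction is the mass defect times c². The sketch below uses standard particle masses in atomic mass units; including the annihilation of the two positrons with ambient electrons, it recovers the familiar ~26.7 MeV per helium nucleus.

```python
# Energy budget of 4p -> 4He + 2e+ + 2nu, from the mass defect.
M_P = 1.007276       # proton mass, u
M_ALPHA = 4.001506   # helium-4 nucleus mass, u
M_E = 0.000549       # electron (= positron) mass, u
MEV_PER_U = 931.494  # energy equivalent of one atomic mass unit

def pp_chain_energy_mev() -> float:
    """Total MeV released fusing four protons into helium-4,
    counting positron annihilation with ambient electrons."""
    mass_defect = 4 * M_P - M_ALPHA - 2 * M_E  # kinetic energy + neutrinos
    annihilation = 4 * M_E                     # 2 positrons + 2 electrons annihilate
    return (mass_defect + annihilation) * MEV_PER_U

e_total = pp_chain_energy_mev()  # ~26.7 MeV
```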
In this picture, four hydrogen ions (protons) form the nucleus of a helium
atom (⁴He), plus two positrons (e⁺) and two mysterious particles named neutrinos (ν_e).
Two neutrinos are produced each time the fusion reaction occurs, and since the four protons
together are heavier than the reaction products, a large amount of electromagnetic energy —
the sun's luminosity — is released; this is what actually makes the sun shine. These neutrinos
have zero electric charge, enormous penetrating ability, and were long assumed to be massless.
Nuclear fusion in the sun's interior produces neutrinos together with electrons, so they are
called electron neutrinos (ν_e). Besides the electron neutrino, two other types exist: the tau
neutrino (ν_τ) and the muon neutrino (ν_μ). In 1964, Nobel laureate Raymond Davis Jr. began
an experiment to detect solar neutrinos by counting the radioactive argon (³⁷Ar) they produced
in a chlorine-based cleaning fluid. The number of neutrinos found in this experiment was very
much smaller than the expected value — some of the neutrinos seemed to be missing. After the
discovery of a "smoking gun," the difference between the total number of neutrinos and the
number of electron neutrinos was easily explained.
The missing-neutrino problem was ultimately solved experimentally at the Sudbury Neutrino
Observatory (SNO) in Ontario, Canada, announced on 18 June 2001, using 1,000 metric tons
of ultra-pure D2O. By measuring the total number of neutrinos of all types, the SNO detector
provided the fingerprint of the smoking gun: the neutrinos are created as electron neutrinos
but change type on their way to the earth. This conversion is a quantum mechanical process
known as neutrino oscillation. In December 1990, the Kamiokande III experiment showed no
anticorrelation between the sunspot number and the 8B neutrino flux, consistent with data
obtained from the Super-Kamiokande II and III experiments.
In the SSM neutrino flux from the sun was calculated on the assumption of Lneutrino (neutrino
luminosity) = Lg (optical luminosity), which implies that if there is a change in the optical
luminosity then solar neutrino flux will also be changed i.e., neutrino flux will be variable
within the solar cycle. In this connection, it may be mentioned that a perturbed solar model
can account not only the neutrino flux variability but also the solar irradiance variability
within the solar cycle and effects of solar flares. The solar neutrino flux detected by
Homestake, Kamiokande- Superkamiokande, SAGE, GALEX-GNO detectors are variable in
nature. The periodicity of the solar neutrino flux is also compatible with the periodicity of the
other solar activities i.e., sunspots, solar flares, solar proton events, solar irradiances (E>10
MeV) etc.
4. SOLAR PHENOMENOLOGY
Solar Activity Indices: The detailed study of the active regions of the Sun is of prime importance in
astrophysics research. The formation and decay of the magnetic field in the Sun's atmosphere is
responsible for the different solar activity indices such as the solar radio flux, solar flares, coronal mass
ejections, solar eruptions and sunspots. According to solar dynamo theory, the dynamo process
operates on the dynamics of the magnetic field inside the Sun.
So, analysis of different solar activity indices is most important to examine Solar Physics as well as
the Solar-Terrestrial relationship which is the fundamental component of the solar astrophysics. Some
of the important solar activity indices are listed below:
● Sunspot Number (SSN): The solar dynamo process is mainly governed by fluctuations of the internal magnetic field. Several methods exist to measure that dynamo process; the sunspot number is one of the primary indicators used to quantify its strength (Waldmeier, 1962). A sunspot is a dark spot in the photosphere of the Sun produced by the local magnetic field. The mean diameter and temperature of a sunspot are around 37,000 km and 4,600 K respectively. During the 19th century, Rudolf Wolf introduced the sunspot number at the Zurich Observatory; it is also known as the Wolf or Zurich Sunspot Number and is expressed as
R_Z = k (10g + n)     (1.8)
where g is the number of sunspot groups, n is the total number of individual sunspots, and k is a correction factor whose value depends mainly on the method of observation as well as the instrument used (Usoskin & Mursula, 2003a; 2003b). A zero sunspot number indicates a blank Sun, i.e., no dark spots on the Sun's visible surface. The structure of a typical sunspot is shown in Fig. 4.1, where the black spot at the centre is known as the umbra and the lighter surrounding region is called the penumbra.
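The Wolf relation above can be evaluated directly; a minimal sketch, using illustrative counts (3 groups, 11 spots) rather than real observations:

```python
def wolf_number(groups: int, spots: int, k: float = 1.0) -> float:
    """Zurich/Wolf sunspot number R_Z = k * (10*g + n).

    groups : number of sunspot groups (g)
    spots  : total number of individual sunspots (n)
    k      : observer/instrument correction factor
    """
    return k * (10 * groups + spots)

# Illustrative: 3 groups containing 11 spots in total, ideal observer (k = 1)
print(wolf_number(3, 11))  # -> 41.0
```

The factor of 10 per group encodes the empirical observation that the appearance of a new group matters far more than one additional spot in an existing group.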
Fig. 4.1: The pictorial view of a typical Sunspot (Image courtesy: NASA)
The periodic variation of the sunspot number was detected by Schwabe in 1843 and is recognised as the Solar Cycle or Schwabe Cycle. The length of the cycle is computed by taking the difference between two consecutive solar minima. The mean length of a solar cycle is 11 years, but it can vary from 9 to 12 years. Hale introduced a 22-year period due to the magnetic polarity reversal, named the Hale Cycle. Another period, of about 78 years, is known as the Gleissberg Cycle. The sunspot record from 1755 to 1766 is considered Solar Cycle 1. The details of each solar cycle are listed in Table 4.1.
● Sunspot Area (SA): The total area of all sunspots on the Sun's surface is known as the sunspot area. Like the sunspot number, this index is treated as a physical parameter describing solar activity, as it is related to the emerging magnetic field around the sunspots. The index is calculated in ppm (parts per million of the visible solar hemisphere) from sunspot images taken at 672.3 nm. The daily value of this index has been computed by the Royal Greenwich Observatory from images captured at Greenwich, England, since 1874. The sunspot area is also calculated from observations at Kodaikanal, India.
Table 4.1: The details of each Solar Cycle
● Solar Radio Flux (F10.7): The radio emission at 10.7 cm wavelength (2800 MHz) is selected among all other wavelengths because of its high correlation with solar extreme-ultraviolet radiation, and because its observational record is more complete and longer than that of the sunspot-related indices. The 10.7 cm solar radio flux originates in the lower corona and chromosphere of the Sun. The mutual relationship between the sunspot number and the 10.7 cm solar radio flux was studied by Floyd et al. (2005), who found it stable for 25 years; during solar cycle 23, however, this relationship changed owing to the Gnevyshev Gap (Bruevich et al., 2014). A nonlinear correlation between the sunspot number and the 10.7 cm radio flux was analysed during solar cycles 18-20, and to attain the best linear approximation two regimes were proposed by Vitinsky et al. (1986): (1) lower radio flux (F10.7 < 150 sfu) and (2) higher radio flux (F10.7 > 150 sfu). [1 solar flux unit (sfu) = 10⁻²² W m⁻² Hz⁻¹.] There are two major components of the 10.7 cm solar radio emission: one is rotationally modulated and the other is unmodulated. The unmodulated emission originates outside active regions, mostly from thermal bremsstrahlung, which dominates during solar minimum. The rotationally modulated 10.7 cm (2800 MHz) emission arises mostly from thermal gyroresonance emission originating in the magnetic fields above sunspots. The solar radio flux at 10.7 cm has a high degree of correlation with all other solar phenomena, which indicates the interdependence of the different plasma parameters and their sources; solar radio flux is therefore an excellent indicator of major solar activity. It also plays a very valuable role in forecasting space weather because it originates in the lower corona and chromosphere. Detailed work on solar radiation models has shown that the principal characteristics of the energetic-particle spectra contribute to the solar radio flux.
The daily F10.7 index is measured at a centre frequency of 2800 MHz with a bandwidth of 100 MHz using two automated radio telescopes (flux monitors) at the Dominion Radio Astrophysical Observatory in Penticton, British Columbia.
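The sfu definition quoted above is straightforward to apply; a minimal sketch, where the 150 sfu threshold is the Vitinsky et al. regime boundary mentioned in the text and the 70 sfu sample value is an illustrative quiet-Sun level:

```python
SFU_IN_SI = 1e-22  # 1 solar flux unit = 1e-22 W m^-2 Hz^-1

def sfu_to_si(flux_sfu: float) -> float:
    """Convert a 10.7 cm radio flux from sfu to W m^-2 Hz^-1."""
    return flux_sfu * SFU_IN_SI

def vitinsky_regime(flux_sfu: float) -> str:
    """Classify the flux into the two linear-approximation regimes."""
    return "low" if flux_sfu < 150.0 else "high"

print(sfu_to_si(70.0))        # quiet-Sun level, in SI units
print(vitinsky_regime(70.0))  # -> low
```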
● Mg II core-to-wing Ratio (Mg II): The Mg II core-to-wing ratio is a good solar activity index in terms of solar radiation. It has a better correlation with solar extreme-UV emission than the 10.7 cm solar radio flux and can be utilised as a proxy in place of the 10.7 cm flux to model solar extreme-UV emission over several solar cycles (the correlation between the Mg II core-to-wing ratio and the 10.7 cm radio flux is around 0.99 over the entire data set). The US Air Force has also shown that the long-term RMS error in their satellite models is reduced by 20-40% when the 10.7 cm radio flux data are replaced with Mg II c/w ratio data. The Mg II c/w ratio has been measured by various instruments on ESA and NASA satellites, such as SUSIM, SOLSTICE, GOME, NOAA9 and NOAA11, at different time scales. The correlations between the different data sets from those instruments are around 0.986 to 0.996, and the time series were merged into a single composite series using linear scaling. The Mg II index is very similar to the Ca II K index, as it is formed by taking the ratio of the core-line emission (near 2800 Å) to the wing emission (near 2767 and 2833 Å). For measuring solar variability originating in the chromosphere, however, the Mg II c/w ratio plays the more significant role, as the h and k lines of Mg II are much stronger than the Ca II lines. The resonance lines of Mg II are formed at plasma temperatures of around 10,000-15,000 K, and they deliver important information from the photosphere out to the higher parts of the chromospheric plateau. Mg II lines effectively represent the prominence-to-corona transition region (PCTR) between the cool, dense core and the hot corona. Thus a continuous composite single time series of the Mg II c/w ratio is very important to the scientific community for deriving solar extreme-UV emission.
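The core-to-wing ratio itself is a simple quantity; a minimal sketch, assuming irradiance values have already been extracted at the core (~2800 Å) and wing (~2767 and 2833 Å) positions named in the text. The numbers below are illustrative, not real measurements:

```python
def mg2_core_to_wing(core: float, wing_blue: float, wing_red: float) -> float:
    """Mg II index: core-line irradiance divided by the mean wing irradiance."""
    return core / ((wing_blue + wing_red) / 2.0)

# Illustrative irradiance values (arbitrary units), not real data
print(mg2_core_to_wing(core=0.30, wing_blue=1.10, wing_red=1.14))  # -> ~0.268
```

Because it is a ratio, the index cancels most instrument degradation, which is why composites from different spectrometers can be merged by simple linear scaling.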
● Solar Flare (SF): A solar flare is an intense and rapid brightness variation in the Sun's atmosphere. The sudden release of magnetic energy from the solar atmosphere due to reconnection of the magnetic field in the coronal region is the main source of solar flares (Sturrock, 1968; Benz, 2017). The radiation emitted during a flare spans a wide range of the electromagnetic spectrum, from radio to X-rays and gamma rays. The total energy released by a large solar flare is around 10³² ergs, equivalent to the energy of millions of 100-megaton hydrogen bombs. R. C. Carrington and R. Hodgson first recorded a solar flare in white-light images of the Sun in 1859. A solar flare is a complex process in both the spatial and the temporal domain, extending from the chromosphere to the corona of the Sun. During flaring activity, solar energetic particles such as protons, electrons and heavy nuclei are also released and accelerated through the solar atmosphere. Flares can last from a few seconds to a few hours. The effect of a solar flare on technological instruments in space is much more important than that of other solar activity, and it also affects living things on the Earth. Although the impact of an associated coronal mass ejection is minimised by the magnetic field of the Earth, the associated X-ray emission disturbs the Earth's ionosphere and the UV radiation increases the temperature of the Earth's atmosphere. The solar flare is therefore considered one of the most prominent research topics in solar physics as well as space weather. A bright solar flare in a solar active region is shown in Fig. 4.2. The concept of the Solar Flare Index was first introduced by Kleczek (1952) as FI = I × T, which is roughly proportional to the net energy emitted by the flare. In this relationship, I is the intensity scale given in Table 4.2 and T is the duration (in minutes) of the flare in Hα flux.
The classification of solar flares is made on the basis of their brightness and importance. The importance is categorised as 1, 2, 3 or 4 according to the size of the flare, as expressed in Table 4.3, whereas the brightness is defined as B = bright, N = normal and F = faint in terms of emission intensity. The calculated data sets are available on the web pages of the Kandilli Observatory (http://www.koeri.boun.edu.tr/astronomy) as well as the National Geophysical Data Center (NGDC) (ftp://ftp.ngdc.noaa.gov/STP/SOLAR_DATA).
Fig 4.2: A bright solar flare in the active region of the Sun (Image courtesy: NASA)
Importance      I       Importance      I
SF, SN, SB      0.5     2B              2.5
1F, 1N          1.0     3N, 3F, 4F      3.0
1B              1.5     3B, 4N          3.5
2F, 2N          2.0     4B              4.0
Table 4.3: The intensity scale I according to the importance and brightness of the flare in Hα flux
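Kleczek's flare index FI = I × T can be computed directly from the table above; a minimal sketch, where the lookup dictionary transcribes the table and the sample flare is illustrative:

```python
# Intensity scale I for each importance/brightness class (from the table above)
INTENSITY = {
    "SF": 0.5, "SN": 0.5, "SB": 0.5,
    "1F": 1.0, "1N": 1.0, "1B": 1.5,
    "2F": 2.0, "2N": 2.0, "2B": 2.5,
    "3N": 3.0, "3F": 3.0, "3B": 3.5,
    "4F": 3.0, "4N": 3.5, "4B": 4.0,
}

def flare_index(importance_class: str, duration_min: float) -> float:
    """Kleczek (1952) flare index FI = I * T, with T the Halpha duration in minutes."""
    return INTENSITY[importance_class] * duration_min

# Illustrative example: a 2B flare lasting 18 minutes
print(flare_index("2B", 18.0))  # -> 45.0
```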
From these definitions we shall detail the relations that connect the constituent elements of a spherical triangle, the main object of study of spherical trigonometry, the mathematical technique used in the treatment of observations, whose fundamental concepts are presented in this chapter. We shall call Spherical Astronomy the area of astronomy that involves the solution of problems on the surface of the celestial sphere. One of the main applications of the formulae of spherical astronomy is to obtain the relations between the several coordinate systems employed in astronomy. The choice of a coordinate system depends on the problem to be solved, and the transformations between the systems allow measurements made in one system to be converted into another. These transformations can be obtained using either spherical trigonometry or linear algebra. Another interesting application is the transformation from a coordinate system with origin at the centre of the Earth into a coordinate system centred on another planet, a spacecraft, or the barycentre of the Solar System, which is especially useful for the study of the positions and motions of objects in the Solar System.
• Any plane that passes through the centre of the sphere intersects the sphere in a great circle.
• Any circle resulting from the intersection of the sphere with a plane that does not pass through the centre is called a small circle.
Figure: The spherical angle C is the dihedral angle between the planes that cross the sphere at the arcs
AP and PB. It is also defined as the angle between the tangent lines to the arcs at the intersection point
P.
Corollary
For a sphere of unit radius, the arc of a great circle connecting two points on the surface is equal to the angle, in radians, subtended at the centre of the sphere (Fig. 1.3). This comes directly from the definition of length on a circle: the length of the path c is c = Rψ, where ψ is the angle AÔB in the figure. If R = 1, then ψ and c have the same units, and c = ψ.
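The relation c = Rψ is worth seeing numerically; a minimal sketch computing a great-circle arc length from the central angle (the 6371 km Earth radius is an illustrative round value):

```python
import math

def arc_length(radius: float, central_angle_deg: float) -> float:
    """Great-circle arc length c = R * psi, with psi converted to radians."""
    return radius * math.radians(central_angle_deg)

# On a unit sphere the arc length equals the central angle in radians
print(arc_length(1.0, 90.0))     # -> pi/2 ~ 1.5708
# Illustrative: one degree of arc on a sphere of radius 6371 km
print(arc_length(6371.0, 1.0))   # ~ 111.2 km
```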
Poles
The poles (P, P′) of a great circle are the intersections of the diameter of the sphere perpendicular to the great circle with the spherical surface. The poles are antipodes (diametrically opposed points), i.e., they are separated by arcs of 180°.
Spherical angle
The angle generated by the intersection of two great-circle arcs is the angle between their planes (dihedral angle), and is called a spherical angle (Fig. 1.4). The spherical angle can also be defined as the angle between the tangents (PA′, PB′) to both arcs of great circle at their intersection point.
Elements: the sides of the spherical angle are the arcs of great circle and, at their intersection, we have the vertex. In the figure, the sides are PA and PB, and the vertex is P.
Coordinate Systems
Fundamentals Glossary
Zenith: point directly overhead of the observer.
Antipode: point diametrically opposed to another on the surface of a sphere.
Nadir: antipode of the zenith.
Meridian: the North-South line, passing through the zenith.
Diurnal motion: the east-west daily rotation of the celestial sphere. A star path in diurnal motion is a
small circle around the celestial pole.
Culmination: the meridian passages of a star in its diurnal motion. There are upper and lower
culminations.
Celestial equator and Ecliptic: two fundamental great circles defined in the celestial sphere. The
celestial equator is the projection of Earth’s equator on the celestial sphere. The ecliptic is the
projection of the Earth’s orbit in the celestial sphere. From our point of view, it is the path traced by
the annual motion of the Sun in the sky.
Equinox: from Latin aequinoctium, equal nights. The instant when the Sun is at the celestial equator, so that day and night have equal duration.
Solstice: from Latin solstitium, the Sun standing still. The instant when the Sun reverses its north-south annual motion, defining either the longest or the shortest night of the year.
Vernal Point: one of the intersections between the celestial equator and the ecliptic. Also called
Vernal Equinox, or First Point of Aries. The antipode of the vernal point is the Autumnal Point, also
called Autumnal Equinox or First Point of Libra.
Sidereal: with respect to the stars (from Latin sidus, star). A sidereal period refers to the time taken
to return to the same position with respect to the distant stars.
Synodic: with respect to alignment with some other body in the celestial sphere, typically the Sun (from Greek sunodos, meeting or assembly). A synodic period typically refers to the time taken to return to the same position with respect to the Sun. For the Sun itself, another reference in the sky is used (generally the meridian for the synodic day, or the vernal point for the synodic year).
The zenith distance z = 90° − h is often used instead of the altitude h. Stars at the same altitude define an almucantar, a small circle parallel to the horizon.
Hour coordinate system. The hour angle is measured from the meridian to the hour circle of the star. The hour angle is zero when a star culminates and increases from 0h to 24h. Stars on the same hour circle have the same hour angle.
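The hour angle can be computed from the local sidereal time; a minimal sketch, assuming the standard relation H = LST − RA (not written out in this excerpt) with both quantities in hours. The sample values are illustrative:

```python
def hour_angle(lst_hours: float, ra_hours: float) -> float:
    """Hour angle H = LST - RA, wrapped into [0, 24) hours.

    H = 0 at upper culmination and increases with the diurnal motion.
    """
    return (lst_hours - ra_hours) % 24.0

# Illustrative: local sidereal time 6h, star at right ascension 4h30m
print(hour_angle(6.0, 4.5))   # -> 1.5  (the star culminated 1h30m ago)
print(hour_angle(2.0, 4.5))   # -> 21.5 (it culminates in 2h30m)
```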
Spherical triangle
A spherical triangle is the figure formed by arcs of great circle that pass through 3 points, connected in pairs, that intersect on the surface of a sphere. A Eulerian spherical triangle has each side and angle less than 180°.
Corollary 1: Three points that do not belong to the same great circle define a plane that does not pass through the centre of the sphere.
Corollary 2: The sphere can be divided in such a way that the 3 points are always in the same hemisphere, so the measure of each angle of the spherical triangle cannot exceed 180°.
Corollary 3: A spherical triangle is formed only by great-circle arcs; it cannot be formed by arcs of small circles.
The spherical triangle has 6 elements: 3 angles, usually referred to by capital letters (A, B, C), and 3 sides, opposite to the angles, referred to by lowercase letters (BC = a, CA = b, AB = c). The vertices of the spherical angles are the vertices of the spherical triangle. The sides (AB, BC, CA) are arcs of three great circles, and the angles (A, B, C) are measured by the dihedral angles.
Properties
The following properties are valid for Eulerian triangles, i.e., those for which each side or angle does not exceed 180°.
1. The sum of the three sides of a spherical triangle is between 0° and 360° (2π): 0° < a + b + c < 360°.
2. The sum of the three angles of a spherical triangle is greater than 180° (π) and smaller than 540° (3π): 180° < A + B + C < 540°.
3. One side is greater than the difference of the two others and smaller than their sum: |b − c| < a < b + c.
4. When two sides are equal, the two opposite angles are also equal, and vice-versa: a = b ⇔ A = B.
5. The order in which the values of the sides of a spherical triangle are distributed is the same as that of the angles: a < b < c ⇔ A < B < C.
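These properties can be checked numerically with the spherical law of cosines, cos a = cos b cos c + sin b sin c cos A, a standard formula of spherical trigonometry that is not written out in this excerpt; a minimal sketch:

```python
import math

def side_from_cosine_law(b_deg: float, c_deg: float, a_angle_deg: float) -> float:
    """Side a (degrees) from sides b, c and the included angle A.

    Spherical law of cosines: cos a = cos b cos c + sin b sin c cos A.
    """
    b, c, A = map(math.radians, (b_deg, c_deg, a_angle_deg))
    cos_a = math.cos(b) * math.cos(c) + math.sin(b) * math.sin(c) * math.cos(A)
    return math.degrees(math.acos(cos_a))

# A triangle with two 90-degree sides and a 90-degree included angle
# (one octant of the sphere) closes with a third side of 90 degrees:
a = side_from_cosine_law(90.0, 90.0, 90.0)
print(round(a, 6))  # -> 90.0
# Property 3 holds for this triangle: |b - c| < a < b + c, i.e. 0 < 90 < 180
```

Note that this octant triangle also illustrates property 2: its three angles are all 90°, summing to 270°, which exceeds the planar 180°.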
5. INSTRUMENTATION
Telescope
Lens operation: https://www.youtube.com/watch?v=EL9J3Km6wxI
Telescope categories: https://www.youtube.com/watch?v=_v1RWyzQAng
Introduction
The night sky has always attracted people by its charming mystery. Observers used their naked eyes for exploration for many centuries but, owing to the limitations of eyesight, could not achieve much. It is hard to overestimate how important the invention of the telescope was for astronomers: it opened an enormous field for visual observations, which led to many brilliant discoveries. That happened in 1608, when a German-born Dutch eyeglass maker had the idea of combining several lenses and created the first telescope [PRAS]. This occasion is now almost forgotten; the device was not used for astronomical purposes, finding its application instead in military use. The event that remains in people's memories is Galileo's construction of his first telescope in 1609. The first Galilean optical tube was very simple: it could only magnify objects three times. After several modifications, the scientist achieved higher optical power, which helped him to observe the phases of Venus, lunar craters and the four Jovian satellites. The main tasks of a telescope are to gather light from a celestial source, to resolve fine detail, and to magnify the image.
The main parts of any telescope are the following:
• Primary lens (for refracting telescopes), the main component of the device. The bigger the lens, the more light the telescope can gather and the fainter the objects that can be viewed.
• Primary mirror (for reflecting telescopes), which plays the same role as the primary lens in a refracting telescope.
• Eyepiece, which magnifies the image.
• Mounting, which supports the tube and enables it to be rotated.
Telescopes can be divided into two main categories: refractors and reflectors.
Refracting telescopes
The history of the refractor's invention was discussed in the introduction. The refractor is the simplest type of telescope, combining two lenses at the ends of a tube. As mentioned above, the main component of this type of telescope is the primary lens, the objective: a convex (converging) lens that defines how faint an object can be viewed. The eyepiece, a second lens, is placed at the other end of the tube and defines the magnification of the observed object. The appearance and a schematic view of a refracting telescope are shown in the Figure below.
Reflecting telescopes
This later type of telescope was first proposed by Giovanni Francesco Sagredo (1571-1620), who suggested that a curved mirror could be used instead of a lens. The first reflecting prototype was created by Isaac Newton in 1669. These telescopes employ a parabolic mirror to gather light from distant sources and one or more auxiliary mirrors that direct the light path to the eyepiece. The appearance and a schematic view of a reflecting telescope (Newtonian design) are shown in the Figure below.
Principle of operation and main parameters
The light from a source reaches the primary lens (or primary mirror) and converges at the focal point of the system. Depending on the design, the light either enters the eyepiece directly (refractors) or is reflected out of the main tube towards the ocular lens (reflectors). The angular resolution is defined by the diameter of the main lens or mirror, referred to as the aperture D and measured in millimetres. For visible light, this limit is described by the following formula:
sin α = 138 / D,
where α is the resolution in arcseconds and D the aperture in millimetres. Each lens or mirror has a special characteristic known as the focal length. If the focal length of the objective is F and the focal length of the eyepiece is f, then the magnification of the image produced by the optics is described by the following simple formula:
M = F / f.
As mentioned before, the bigger the aperture, the more light the optics can gather, and the more light that enters the telescope, the fainter the objects that can be viewed. The limiting magnitude of a telescope can likewise be estimated from its aperture.
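The two formulas above can be combined in a few lines; a minimal sketch, where the 150 mm aperture, 1200 mm objective focal length and 10 mm eyepiece are illustrative values:

```python
def resolution_arcsec(aperture_mm: float) -> float:
    """Diffraction-limited resolution in arcseconds: alpha ~ 138 / D (D in mm)."""
    return 138.0 / aperture_mm

def magnification(objective_focal_mm: float, eyepiece_focal_mm: float) -> float:
    """Magnification M = F / f."""
    return objective_focal_mm / eyepiece_focal_mm

# Illustrative 150 mm f/8 refractor used with a 10 mm eyepiece
print(resolution_arcsec(150.0))     # -> 0.92 arcsec
print(magnification(1200.0, 10.0))  # -> 120.0
```

Note how the two quantities are independent: swapping eyepieces changes the magnification but not the resolution, which is fixed by the aperture alone.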
Mountings
There are two main classes of mount: altitude-azimuth and equatorial. Telescopes with the first mount are aligned with the local zenith; experiencing symmetrical gravity loads, they are normally less massive. The disadvantage is that the device must be moved about two axes simultaneously in order to track an object. The equatorial mount is aligned with the polar axis and requires a heavy counterweight to balance the telescope. Its advantage is that the tube has to be rotated about the polar axis only in order to follow the sky, which makes this configuration much more convenient for tracking objects.
The quality of the views produced by optical telescopes is limited by optical defects called aberrations. Refractors suffer from chromatic aberration more than from the other deviations. It results from the difference in the speed of light of various wavelengths in the medium: blue light is refracted more strongly than red. As a consequence, a star appears surrounded by colourful concentric rings of light. A corrective lens placed on top of the primary objective reduces chromatic aberration. The other types of aberration are monochromatic: spherical aberration, coma, astigmatism, distortion and field curvature. Spherical aberration results from the special feature of a spherical surface: it focuses light along a line, not at a point, so rays from the centre of a mirror focus farther from it than rays from the edges. This is fixed by employing parabolic mirrors. The remaining aberrations are off-axis: they depend on the field angle.
Space Flight Particle Instrument
A Space Flight Particle Instrument is a type of sensor or detector that is used to measure the
presence and properties of particles in the space environment. These instruments can be used
on a variety of space-based platforms, including satellites, space probes, and manned
spacecraft. Some examples of space flight particle instruments include:
Cosmic ray detectors: These instruments measure the energy and charge of cosmic
rays, which are high-energy particles that travel through space.
Solar particle detectors: These instruments measure the energy and charge of particles emitted by the Sun during events such as solar flares and coronal mass ejections.
Plasma instruments: These instruments measure the properties of the charged particles
that make up the solar wind and other space plasmas, such as temperature, density,
and velocity.
Magnetometer: This instrument measures the strength and direction of the magnetic
field in space.
Dust detector: This instrument measures the presence and properties of dust particles
in the space environment.
Charged particle detector: This instrument measures the presence and properties of
charged particles such as electrons and ions in space.
These instruments are important for understanding the space environment and the effects of
solar activity on the Earth's environment. They also help to protect spacecraft and astronauts
from the hazards of space radiation.
A typical space flight particle instrument consists of several elements (Figure below).
Which of the elements are present depends on the particular instrument technique and implementation. First, there is invariably a collimator or gas-inlet structure, which essentially defines the field of view and shields the subsequent sections from unwanted stray particles, photons and penetrating radiation. Following this section, in neutral-particle instruments only, there is an ionization element that converts the neutrals into ions amenable to further analysis by electromagnetic fields or, when appropriate, by time-of-flight techniques. After the collimator and ionization sections, there is an initial analyzer,
such as a solid-state detector or an electrostatic analyzer, where the charged particles are
filtered according to their energy per charge. This may be followed by a second analyser
section that performs ion mass discrimination. Finally, the particle encounters a detector that
converts the arrival of the particle (and often its energy) into an electric signal that can be
further processed in a signal processing section. The resulting digital data is passed to the
spacecraft data handling system and relayed to the ground via regular spacecraft telemetry.
On the ground, the raw data is further processed to obtain physical parameters. Mission
design, spacecraft design, hardware design choices as well as software compression and
binning schemes affect the performance of the instrument. Starting from what environment
needs to be studied (density, species, characteristic velocity if any, characteristic Mach
numbers if any, temperature (T), pressure tensor, distribution function knowledge, intrinsic
time of phenomenon, boundary types and characteristic lengths) measurement requirements
are derived such as geometric factor, signal to noise, mass range, mass resolution, energy
range, energy resolution, field-of-view, energy/angle resolution, the time resolution of the
measurement, and analyser type.
In spite of the fact that space particle instruments have been constructed in a wide variety
of geometries and using many combinations of particle energy, charge state, particle mass,
and species analysis, there are in fact only a few basic techniques that exist for selecting
particles with specific properties.
These are: analysis solely by static electric fields, analysis solely by magnetic fields, analysis by combinations of electric and magnetic fields, analysis by time-varying electric fields (sometimes in combination with static magnetic fields), analysis by determining a particle's time-of-flight over a fixed distance, and analysis by determining a particle's rate of energy loss through matter.
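As an illustration of the time-of-flight principle listed above, the particle mass follows from the flight time over a known drift length once the kinetic energy is selected (e.g. by an electrostatic analyzer). A minimal sketch; the 1 keV energy and 10 cm drift length are illustrative assumptions, not values from the text:

```python
# Time-of-flight mass analysis: a particle of kinetic energy E covering a
# drift length d in time t satisfies E = (1/2) m (d/t)^2, so m = 2 E (t/d)^2.

EV = 1.602176634e-19  # joules per electronvolt

def mass_from_tof(energy_ev: float, drift_m: float, tof_s: float) -> float:
    """Particle mass in kg from its kinetic energy and time of flight."""
    return 2.0 * energy_ev * EV * (tof_s / drift_m) ** 2

# Illustrative: a 1 keV proton over a 10 cm drift section.
# v = sqrt(2E/m) ~ 4.38e5 m/s, so t ~ 2.28e-7 s.
m = mass_from_tof(1000.0, 0.10, 2.283e-7)
print(f"{m:.3e} kg")  # close to the proton mass, ~1.67e-27 kg
```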
Contemporary space flight instruments almost always use either open window electron
multipliers or silicon solid-state detectors to detect those particles that are passed by the
various analyser elements. Determining the performance of such particle detectors is critical
to the overall instrument laboratory calibration because their post-launch stability is always
an important factor. Since a Faraday cup is sometimes used as an integrating current detector
in a few particle detector systems and often forms an important element in laboratory
calibration facilities, the design and operation of a Faraday cup is also discussed. The basic
principle behind each of these analysis techniques is briefly described in this chapter. Each
section on the analysis principle usually contains a more detailed description of specific
instruments in order to provide background for the material on the calibration and in-flight
performance verification of those instruments that appear in subsequent chapters. Whenever a
specific instrument involves a special feature, for example, unusual collimator design or a
process to convert a particle from a neutral to an ion, that feature is highlighted.
Error Sources
Sources of errors or uncertainties in the measurements and the derived physical quantities are
numerous. If the goal is to compute fluxes or distribution functions, then uncertainties arise from
• Uncertainties in geometric factor,
• Degradation of detector efficiency,
• Degradation of analyzer voltages,
• Out-of-band response,
• Sensitivity to solar UV,
• Dead-time effects at high count rates,
• Poor counting statistics at low incident fluxes,
• Aliasing caused by time variations in the incident particle population.
Some of these uncertainties are introduced by imperfections in design and/or calibration.
Calibration concerns primarily the total geometric factor of an instrument, including detector
efficiency, or the determination of the energy- and angle-passbands.
Any degradation in detector efficiency after calibration introduces (unknown) uncertainties if such
changes remain undetected or cannot be quantified. Similar uncertainties are introduced if the
voltages applied to define the energy- and/or angle-passbands degrade in some unknown fashion.
Any responses to solar UV, or to particles outside the primary energy-/ angle-passbands can in
principle be determined through extensive calibration, but complicate the conversion to meaningful
quantities, and thus fall more into the category of design-driven uncertainties. Dead times in the
detector or its associated electronics introduce losses in counts at high count rates that can be
calibrated out in some limited sense only. Low values of the counts accumulated per sampling interval
introduce uncertainties that have nothing to do with calibration but are a fundamental experimental
limitation. Poisson statistics guarantee that their relative uncertainty decreases as one over the
square root of the counts. A design with increased geometric factor is not a solution for this problem
if the detector would then be saturated in other, high-intensity environments. Time variations in the
incident particle distribution that occur within the accumulation time of the measurements are an
obvious source of errors as well. Speeding up the sampling solves this problem only if the statistical
error resulting from the reduced number of counts per sample remains adequate.
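Two of the uncertainties above are easy to quantify; a minimal sketch of a dead-time correction and the Poisson relative error. The non-paralyzable dead-time model is a common assumption not stated in the text, and the rates used are illustrative:

```python
import math

def deadtime_corrected_rate(measured_rate: float, dead_time_s: float) -> float:
    """True count rate for a non-paralyzable detector:
    n = m / (1 - m * tau), valid while m * tau < 1."""
    return measured_rate / (1.0 - measured_rate * dead_time_s)

def poisson_relative_error(counts: float) -> float:
    """Relative statistical uncertainty 1/sqrt(N) of an accumulated count."""
    return 1.0 / math.sqrt(counts)

# Illustrative: 1e5 counts/s measured with a 1 microsecond dead time
print(deadtime_corrected_rate(1e5, 1e-6))  # ~1.11e5 true counts/s
# 100 counts in a sampling interval -> 10% statistical uncertainty
print(poisson_relative_error(100.0))       # -> 0.1
```

The two effects pull in opposite directions: shortening the sampling interval reduces dead-time losses and aliasing but worsens the 1/√N statistical error.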
If the goal is to compute moments of the particle distribution function, then additional errors
arise from
• Limited energy range and/or energy resolution,
• incomplete angular coverage and/or resolution,
• spacecraft charging.
Obviously, if an important part of the incident distribution is not measured, because it falls outside the
energy- and/or angle-range of the instrument, one cannot expect the moments, for example, the
particle number density, to correctly represent the incident population. Spacecraft charging presents a
special problem. If it is such that it attracts the particles of interest (negative in case of ions, positive
in case of electrons), then it increases the energy of the incident particles. This energy increase can be
corrected for if the value of the potential is known. However, if the sign of the potential is such that it
retards the particles of interest, then there might be particles in the incident distribution that can no
longer reach the detector, with obvious consequences that cannot be corrected.
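The energy shift from spacecraft charging described above can be corrected when the potential is known; a minimal sketch for singly charged particles, where the sign convention and the sample values are illustrative assumptions:

```python
def incident_energy_ev(measured_ev: float, charge_sign: int,
                       spacecraft_potential_v: float) -> float:
    """Incident particle energy before the spacecraft potential acts on it.

    E_inc = E_meas + q * U_sc, with q in units of the elementary charge and
    U_sc the spacecraft potential in volts (so the correction is in eV).
    Only valid when the potential accelerates the particle toward the craft;
    a retarding potential excludes part of the distribution entirely.
    """
    return measured_ev + charge_sign * spacecraft_potential_v

# Illustrative: a proton measured at 105 eV on a spacecraft charged to -5 V
# (the negative potential attracted the ion, adding 5 eV on the way in)
print(incident_energy_ev(105.0, +1, -5.0))  # -> 100.0 eV incident
```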
Important Characteristics of Analysers
When selecting an instrument for a particular mission, or when comparing different plasma
instruments, certain key parameters have proven very useful. These are: the energy or
velocity range, the field of view, the velocity-space resolution, and the geometric factor,
which determines the sensitivity and temporal resolution. Also to be considered are the
temporal resolution for a two-dimensional and for a three-dimensional cut of velocity phase
space. Equally important are the resources that the instrument requires from the spacecraft,
such as mass, power, size and telemetry rate. Charged particle optics borrows many terms
from photon optics, such as spectrograph, spectrometer, fringing fields, and aberration. For
example, a cylindrical ESA is the charged-particle-optics analogue of the scanning
spectrograph in photon optics. There is, however, an important difference between charged
particle optics and photon optics: in charged particle optics, the optical properties and the
dispersion interact.
Detectors
There are relatively few detector types used in space physics to detect particles, either
charged or neutral. These include Faraday cup devices to measure the current associated with
charged particle distributions, windowless electron multipliers such as channel electron
multipliers (Channeltrons) and microchannel plates that may be operated in either a pulse
counting or an integrated current mode, and solid-state or scintillation detectors used for
higher energy particles. The succeeding sections discuss each of these particle detection
technologies.
Faraday Cups
Faraday cups are generally simple to construct and are fast, accurate, current collectors.
These collectors are connected directly to current measuring devices, and current
measurements as low as 10−15 A are possible with modern electrometers. Measurement
accuracy of a Faraday cup is affected by a series of secondary processes created by particles
impacting the cup, such as: emission of secondary electrons and secondary ions, or reflection
of charged particles; collection of electrons and ions produced near the cup (for example, by
ionization of the residual gas or at the aperture structure); current leakage to ground;
formation of galvanic elements due to the use of different materials; and penetration of
particles through the cup structure. Escaping secondary electrons are minimized by a
suppressor grid, biased to about 30 V, placed directly in front of the collector plates; by
biasing the cup together with the measuring electronics; or by a geometric design in which
the collector plates are mounted at the end of a long, high-aspect-ratio tube such as a
cylindrical tube. To measure very low ion currents, an additional shielding cylinder should be
used to screen the Faraday cup from stray ions or electrons. When taking
the proper design precautions, Faraday cups are well suited for absolute current
measurements because they are not affected by the same gain degradation as channel
electron multipliers or microchannel plates.
The Rosetta/ROSINA double-focusing mass spectrometer includes a Faraday cup in that
instrument’s detector system in addition to microchannel plates and Channeltrons. The long-
term stability of the Faraday cup provides the absolute calibration for the other detectors that
may suffer degradation with time, as well as providing measurements during times of
exceptionally high fluxes. Faraday cups also serve the very important function of particle
beam monitors in laboratory calibration facilities.
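Because a Faraday cup measures current directly, converting the reading to a particle flux is a one-line calculation. The collector area and current below are illustrative assumptions, and the conversion assumes full collection with negligible secondary-particle contamination:

```python
E_CHARGE = 1.602e-19  # elementary charge, coulombs

def flux_from_current(current_amps, area_cm2, charge_state=1):
    """Convert a collected Faraday-cup current to a particle flux
    (particles per cm^2 per second), assuming ideal collection."""
    return current_amps / (charge_state * E_CHARGE * area_cm2)

# Illustrative: 1 pA of singly charged ions on a 1 cm^2 collector.
print(f"{flux_from_current(1e-12, 1.0):.2e} ions / (cm^2 s)")
```

This also illustrates why electrometer sensitivity matters: even a flux of millions of ions per square centimetre per second corresponds to only picoamperes of current.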
Microchannel Plates
Microchannel plates (MCPs) are electron multipliers that are used in a variety of scientific and
technical applications, particularly in the field of vacuum and low-light-level detection. They consist
of a stack of thin glass plates that have been etched with a large number of tiny channels, typically
between 10 and 100 micrometers in diameter. These channels are typically arranged parallel to one
another and are separated by thin walls called septa. When an electron or other charged particle strikes
the surface of an MCP, it causes a cascade of secondary electrons to be emitted from the walls of the
channels, which in turn generates a much larger number of electrons. This process is known as
electron multiplication.
An example of the use of SSDs and scintillators in a flight instrument is shown in the figure
below. The instrument consists of four SSDs (D1–D4), two inorganic scintillators (S1 and
S2) and a plastic scintillator (S3). The SSDs are used to define the field-of-view of the
instrument. The particle energies of interest are so high (up to 400 MeV) that no collimation
or shielding is possible. The two inorganic scintillators are composed of a dense material,
Gadolinium Silicate (GSO), and their function is to absorb as much energy as possible from
the incident high energy particles. Finally, the S3 plastic scintillator is used as veto for out-of-
aperture particles striking S1.
One advantage of inorganic scintillators is their high stopping power for spectroscopy of high
energy particles. Organic scintillators have very short emission times, making them excellent
choices for use as a veto in coincidence circuits. Finally, both scintillator types can be
physically shaped to meet design requirements. One disadvantage of scintillators is their
relatively poor energy resolution compared to SSDs. This is due to 1) the much greater energy
required to produce a scintillation photon (40–50 eV) than the energy required to produce
an electron-hole pair in silicon (3.6 eV) and 2) inefficiencies in transporting the
photons to a light measuring device. Another disadvantage is that scintillator light is usually
measured by photomultipliers. These are relatively large devices and this makes it difficult to
design, package and shield the sensor. In the last few years, with the development of fast,
high light output scintillators it has become feasible in some cases to use photodiodes and
avalanche photodiodes, which resolves the issues connected with photomultiplier size.
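The resolution penalty quoted above follows from counting statistics: the fractional resolution scales as 1/√N, where N is the number of information carriers produced per energy deposit. A rough sketch, ignoring Fano factors and light-collection losses (the 1 MeV deposit and 45 eV/photon figure are illustrative assumptions within the 40–50 eV range given above):

```python
import math

def frac_resolution(energy_ev, ev_per_carrier):
    """Statistical limit on fractional energy resolution: with
    N = E / w information carriers, sigma_E / E ~ 1 / sqrt(N)."""
    n = energy_ev / ev_per_carrier
    return 1.0 / math.sqrt(n)

E = 1e6  # a 1 MeV energy deposit, for illustration
silicon = frac_resolution(E, 3.6)   # ~3.6 eV per electron-hole pair
scint = frac_resolution(E, 45.0)    # ~40-50 eV per scintillation photon
print(f"Si: {silicon:.2%}, scintillator: {scint:.2%}, ratio: {scint / silicon:.1f}x")
```

Even before transport losses, the scintillator's statistical resolution is worse by a factor of √(45/3.6) ≈ 3.5.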
Langmuir Probes
Langmuir probes (LP) have been used extensively on rockets and satellites to measure
ionospheric electron and ion densities, electron temperature, and spacecraft potential. This
section discusses the design and implementation of LP measurements, with particular
emphasis on cylindrical probes that have been used more extensively than any other type.
The key lesson of more than three decades of LP use in space is that the accuracy of the
measurements depends primarily on avoiding implementation errors. Experience with
Langmuir probes since 1959 has shown that most measurement errors arise from: the type of
collector surface material used; failure to avoid surface contamination, or failure to provide
for in-flight cleaning of the collector; failure to place the collector an adequate distance from
the spacecraft and from various appendages that might interfere with its access to
undisturbed plasma; failure to design the electronics to adequately resolve those portions of
the volt-ampere curves that contain the desired geophysical information; and failure to
assure that the spacecraft can serve as a stable potential reference for the measurements.
Langmuir Probe Technique (https://www.youtube.com/watch?v=u44NH1o6Tp8 )
The LP technique involves measuring the current to a probe as a function of an applied
voltage. The current is the sum of the ion, Ii , and electron, Ie, currents collected by the probe.
The voltage is applied to the probe with respect to the satellite reference potential. With
careful spacecraft design, the applied potential is proportional to the voltage between the
probe and the undisturbed plasma being analyzed. The resulting current-voltage
characteristic, called the “volt-ampere curve” or “V-A curve” is a function of the plasma
parameters, electron density, Ne, electron temperature, Te, ion mass, mi , and ion density, Ni,
as well as the probe surface properties, and the probe geometry and orientation relative to the
spacecraft velocity, magnetic field vector, and spacecraft body. Simple Langmuir probe
theory [Mott-Smith and Langmuir, 1926] shows that the amplitude of the electron current, Ie,
is proportional to Ne, and the amplitude of the ion current, Ii , is proportional to Ni . The
current of retarded particles is proportional to the exponential of the voltage, V, divided by
the temperature:

Ie = Ie0 exp(eV / kTe)

Thus the slope of the logarithm of the retarded electron current with respect to voltage is
inversely proportional to the electron temperature, and the V-A curve is a strong function of
temperature.
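In practice, the electron temperature is extracted by fitting a straight line to ln(Ie) versus V in the retarding region; the slope is e/kTe, i.e. 1/Te when Te is expressed in eV. A minimal sketch (the synthetic data, probe current scale, and function name are assumptions; a real analysis must first isolate the retarding region and subtract the ion current):

```python
import math

def electron_temperature_eV(volts, currents):
    """Least-squares slope of ln(Ie) vs V in the electron-retarding
    region; Te [eV] = 1 / slope."""
    n = len(volts)
    y = [math.log(i) for i in currents]
    vm = sum(volts) / n
    ym = sum(y) / n
    slope = sum((v - vm) * (yi - ym) for v, yi in zip(volts, y)) / \
            sum((v - vm) ** 2 for v in volts)
    return 1.0 / slope

# Synthetic retarding-region data for Te = 0.2 eV (~2300 K).
Te_true = 0.2
V = [-1.0 + 0.1 * k for k in range(10)]
I = [1e-6 * math.exp(v / Te_true) for v in V]
print(f"recovered Te = {electron_temperature_eV(V, I):.3f} eV")
```

The requirement quoted earlier, that the retarding region be "truly exponential over several kTe", is exactly the condition for this fit to be meaningful.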
Figure: Block diagram of the Langmuir probe instrument and a theoretical V-A curve.
Many years of experience indicate that the accuracy of the measurements depends on the
details of the implementation. The factors most critical to success involve: (1) using a
relatively short probe that has inherently low surface patchiness and that can be cleaned very
early in the mission by electron bombardment, (2) mounting the probe on a boom that is long
enough to place the collector in the undisturbed plasma that lies beyond the spacecraft ion
sheath, (3) using adaptive circuitry in the electronics to resolve the V-A curves over a wide
range of Ne and Te values, (4) designing the spacecraft to have an adequate conducting area
and a solar array that does not cause Vp to be excessively negative. The degree of success
achieved can be determined by a series of internal consistency tests. First, the volt-ampere
curves should exhibit the form indicated by the theory. The ion saturation regions should be
approximately linear and have a slope that is consistent with the known mean ion mass in the
region. The electron saturation region should exhibit the expected voltage dependence. The
electron retarding region should be truly exponential over several kTe. The Ne measurements
should be consistent with the Ni measurements at densities where the two techniques overlap,
recognizing that end effects (for short probes) will tend to cause Ne to be slightly higher than
Ni. No hysteresis should be evident in the curves when the probe voltage is swept in opposite
directions, and no time constants should be evident in Ii immediately following the sweep
retrace. These criteria for internal consistency are very demanding and, if met, one can gain a
high degree of confidence in the measurements. It can be concluded that Langmuir probes
can provide accurate ionosphere measurements when several important implementation
challenges are successfully addressed.
6. HUBBLE TELESCOPE:
The sophisticated hardware/software of the Hubble Space Telescope is being managed,
developed, integrated, tested, and operated by a complex organization of participants. To
achieve the scientific results expected from the observatory, this team of highly qualified
participants must interact effectively and implement systems engineering discipline into
every aspect of their responsibilities and effort.
The Hubble Space Telescope consists of three major modules as depicted in Figure 1:
The Optical Telescope Assembly (OTA),
The Support Systems Module (SSM), and Scientific Instruments (SI's).
The OTA and SI's contain all of the optical elements
and scientific payload; whereas the SSM provides all necessary spacecraft functions like electrical
power and management, communications, data management, precision pointing control, stray
light protection and structural support between the OTA, SI payload and the STS. Module system
configurations and performance are the results of scientific mission objectives and requirements
constrained by the STS and orbital operational considerations.
The mission objectives and optical system requirements are mutually supportive; the optical
requirement is derived from the mission objectives. For example, the optical wavefront
quality of λ/20 (where λ is the wavelength) will provide a performance level that is essentially limited by
diffraction.
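The diffraction-limited resolution quoted below follows directly from the Rayleigh criterion, θ = 1.22 λ/D. A short check (the 550 nm observing wavelength is an assumption chosen to represent the visible band):

```python
import math

RAD_TO_ARCSEC = 180.0 / math.pi * 3600.0  # ~206265 arc-seconds per radian

def diffraction_limit_arcsec(wavelength_m, aperture_m):
    """Rayleigh criterion: theta = 1.22 * lambda / D."""
    return 1.22 * wavelength_m / aperture_m * RAD_TO_ARCSEC

# HST: a 2.4 m aperture observed at ~550 nm (assumed visible wavelength).
print(f"{diffraction_limit_arcsec(550e-9, 2.4):.3f} arc-seconds")
```

The result, a few hundredths of an arc-second, is consistent with the 0.06 arc-second figure given in the text for the 2.4 m aperture.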
This results in an angular resolution capability that is a function of the telescope aperture and
incoming wavelength. The telescope aperture was essentially fixed by considerations of
optical performance requirements and STS payload bay diameter constraints. The payload
bay could accommodate a telescope with an aperture as large as 2.4 m. In turn, this size
aperture yields a diffraction-limited image of 0.06 arc-seconds in size without considering
any distortion from pointing instability. (An arc-second is obtained by dividing a circle into
360 equal slices, or degrees; each degree into 60 arcminutes; and each arcminute into 60
arcseconds.) Judgment indicated that the 0.06 arc-seconds of angular resolution would allow
a reasonable margin of safety for the primary optical system to offset some pointing instability
and manufacturing deficiencies. Specifically, this amount of margin would support the 0.1
arc-second resolution requirement over a mirror quality range of λ/20 down to λ/13.5.
However, if λ/20 or better was achieved, then this margin could be applied as a relaxation to
pointing stability. Since a final figure quality of λ/19.2 was realized, it was decided to retain
the pointing stability at the initial requirement of 0.007 arc-seconds (RMS). Another
fundamental reason for selecting an overall system with a focal ratio of F/24 was the physical
impracticality of attempting to accommodate five scientific instrument apertures within a
focal plane of a smaller area and at the same time achieve sufficient clearances between
instruments to support in-orbit replacements. The broad spectral coverage (1216 angstroms –
1000 microns) requirement influenced the primary optical system in two important ways. It
prescribed that an all-reflective system be used and that Magnesium Fluoride be applied as an
overcoating material for the reflective aluminium mirror coatings. A refractive optical system
could not be designed to accommodate this required spectral range. The overcoating material
selected was capable of passing the shorter wavelength while also being one of the more
moisture-resistant materials available. The HST is a Ritchey-Chrétien design. To achieve the
λ/20 wavefront quality for the total optical system, the primary mirror was allocated a figure
quality of λ/60, the secondary mirror was allocated λ/100, and the relative alignment
integrity between them had to be maintained within a ±2 micron envelope. The figure
qualities of the mirrors were manufacturable but challenging with available current
technology; however, the severe constraints on total system alignment indicated that the
sophisticated structural support systems required a dynamic real-time misalignment
correction mechanism. To fill this need an active optical interferometer system was
developed. The Optical Control System (OCS), a part of the Fine Guidance System (FGS) is
located at the telescope focal plane, as are the scientific instruments; thereby, it is capable of
providing total optical system alignment knowledge at the precise location where the
incoming light wavefront is transferred to the scientific payload. This device views a star
through the telescope optics and senses wavefront aberrations resulting from primary optical
system distortions and total telescope assembly misalignments. This knowledge is transmitted
to the ground where it is analysed to determine what commands need to be generated to
correct any undesirable aberrations. Correction commands received by the HST can cause the
secondary mirror position to be adjusted for tip, tilt, decentre, and focus. The primary mirror
figure can also be corrected by properly commanding numerous push-pull actuators. The
structural design of the OTA was driven by the requirements for stiffness, high dimensional
stability, strength and lightweight. Metal is used for the main ring of the primary mirror since
this is also the major load-carrying member between the scientific instruments, the OTA, and
the SSM. However, the metering structure, which supports the secondary mirror at a precise
distance from the primary mirror and the remaining truss assembly is fabricated from
graphite epoxy. Using this material arrangement, with active and passive thermal control
methods, yields a structural system that satisfies loads and optical requirements across the
range of environmental conditions expected. To satisfy the 0.1 arc-second image resolution
requirement, the telescope line-of-sight must be held extremely stable relative to the object
under observation. In this telescope design, the larger portions of image quality limitations
were allocated, for practical reasons, to the optical systems. The pointing control system was
tasked to meet its requirements with only 10% of the total error budget available. This
established a 0.007 arc-second (RMS) line-of-sight stability requirement. The 0.01 arc-second
pointing requirement was derived from the 0.1 arc-second minimum SI aperture
requirement. Since a star image of 0.1 arc-seconds (the telescope resolution goal) will fill a
0.1 arc-second SI aperture, the PCS was designed to be capable of pointing accurately to 0.01
arc-seconds. Thus, the PCS can locate the smallest resolved image in the centre of the
smallest aperture and avoid significant signal losses. The PCS depends on numerous
state-of-the-art attitude sensors which, due to individual limitations in field-of-view (FOV),
sensitivity, or accuracy, forced an overall system design that uses an architecture of
individual acquisition modes wherein one sensor or mode dominates until the pointing
accuracy attained is within
the acquisition range of the next higher-order sensor or mode. The three sequential modes of
operation: (1) manoeuvring to the target area, (2) FGS guide star acquisition, and (3)
acquisition of the target star by the SI, will be described to explain how this process achieves
the accuracies required.
Target Star Acquisition
The guide stars used by the FGS were pre-selected to enable the capture of a target star
(observation star) within the telescope’s FOV. The scientific instrument can now observe the
target star at its aperture with its internal scientific detectors. In some instances, when it is
desirable to position the target star at a precise location within the SI apertures, it becomes
necessary to take an image, analyze it either on board or on the ground, and command the
HST to perform a small correcting manoeuvre.
The HST pointing control system uses four reaction wheels for attitude control and
electromagnetic torquers to neutralize the effect of gravity-gradient torques on reaction-
wheel momentum build-up.
SYSTEMS ENGINEERING CONTRIBUTIONS
The HST program has benefited
from a well-organized systems engineering effort. This emphasis, intense in the past several
years, has been rewarded with many important issues being resolved in an optimum manner.
Some of the designs, methods and other contributions are considered to be technologically
significant. Several of the contributions are described in this section.
Rate Gyro Assembly (RGA)
The RGAs are key to the successful operation of the HST pointing and control
system. Except for the performance improvement changes that will be described, the rate
sensor design is the same one used by NASA for the IUE and HEAO programs. The
demanding pointing stability defined for the HST has been the principal force for improved gyro noise
performance. As the program evolved and data became more available on hardware and
vehicle structural characteristics, it became clear that a definite need existed to provide some
margin for fine pointing stability. Because of the increased importance placed on RGA noise
performance, the mechanical and electrical error sources within the assembly were
characterized by tests and evaluations. This action led to a thorough understanding of the
contributors to RGA noise, and a program of specific design modifications that improved
noise performance by at least an order of magnitude was accomplished. The predominant
noise source within the RGA was mechanical noise associated with the gyro wheel. This
mechanical source of error includes gas flow turbulence around the wheel and wheel hunting
about the synchronous motor speed. Each of these sources produced torques on the gyro float
mechanism that are interpreted by the electronics as rates sensed by the unit about its
sensitive input axis. The noise due to motor hunting was minimized by an active high gain
servo loop that sensed hunting and properly controlled electrical motor operation. Turbulence
within the gyro float mechanism was determined, through an elaborate test program in which
special and unique test equipment was developed, to be the dominant noise-producing source.
Seismic noise, though not a contributor to the real RGA inherent noise-producing sources,
must be considered in characterizing the performance of this precision and extremely
sensitive device. To characterize the noise produced by the HST rate sensors and to test
design improvements that resulted, a uniquely seismically quiet test site was used.
It was determined that a shroud, if included in the wheel design, would change the very
turbulent gas flow pattern within the float into a laminar flow, thus minimizing the noise
associated with this phenomenon. The wheel speed for the HST gyro is 19,200 RPM. Several
shroud designs were conceptualized, but through an analytical process, one was selected for
implementation (Fig. 3). With this shroud design, the measured noise level of the RGA has
been reduced by factors of 3.6 to 10.0 across the usable frequency range.
Reaction Wheel Isolation
The four reaction wheel assemblies (RWAs) used in the HST are installed in pairs
within two separate compartments of the SSM. The spin axis of the paired assemblies sharing
a compartment is elevated 20 degrees from a plane that is normal to the telescope line-of-
sight. The planes formed by the spin axes of paired assemblies are separated by 90 degrees.
Each RWA is 0.64 meters in diameter, 0.50 meters high and weighs 48 kilograms. At a wheel
speed of 3000 rpm, a torque of 0.82 Nm is produced. The demanding pointing and stability
requirements of the HST led to a program of quantification wherein noise-contributing
sources were identified, isolated, and characterized. The most significant
disturbances in the RWAs were found to be the axial forces generated by interactions of very
small imperfections in the bearing inner and outer raceways and balls. Initial attempts at
eliminating these disturbances were concentrated on a bearing selection program. This effort
succeeded in reducing the forces present at beginning of life but did not eliminate the
concerns for unacceptable high forces that could be experienced as the bearings wear during
the mission. Additionally, as the HST orbit decays and aerodynamic drag necessitates higher
wheel speeds for pointing control, the real potential existed for higher disturbance forces
early in mission life. Faced with these data, designers strongly recommended the retention of
the precision bearing selection program but also advanced the design of a device that would
physically isolate the induced vibration of the RWA's from the vehicle structural dynamics.
Several isolation concepts were fabricated and tested before a final design was selected.
Included in the evaluation program were a viscoelastic polymer, a stranded and coiled steel
cable, and a purely viscous damping device. The isolator system that exhibited the best
repeatable performance characteristics was the viscous damping device. Some of the unique
design features that this concept has over the other options are:
(1) extremely small stiction and deadband,
(2) linear operation over a very large dynamic range,
(3) effective damping for vibration input amplitudes as small as 5 nanometers, and
(4) stiffness and damping that remain constant over a dynamic amplitude range ratio of
100,000 to 1.
Figure 4 shows a cross-section view of the isolator. Figures 5 and 6 provide a comparison of
RWA induced instability without and with the isolator modification.
Spectrometers
Laser Ablation Mass Spectrometer
Figure: The Laser Ablation Mass Spectrometer as it may operate on the surface of an
asteroid. The laser focuses a pulse of high-intensity light, shown in red, at a small spot on the
rock surface. Some of the ions produced in the fireball enter the instrument and are deflected
by the electric field down to the microchannel plate detector (the gray annular disk at the
front of the instrument). The camera at the top lets the scientist see which rock grain has
been targeted.
Another example of space instrumentation is of an entirely new form—a miniature mass
spectrometer for use on the surface of comets, asteroids, and planets. Planetary science from
space is about to enter the third stage of exploration. The first stage is gross reconnaissance.
All of the planets except Pluto have been visited by at least one spacecraft flyby, and
spacecraft have flown past four asteroids and one comet. We have seen the gross structure of
these bodies.
The second stage is orbital missions to examine the structure of planetary bodies and begin the
process of understanding their origin and evolution. Venus, Earth, Mars, and Jupiter have already had
orbital missions to examine the shape of the surface, the structure, and the coarse composition of these
bodies.
The laser-ablated ions travel from the sample surface into the mass analyzer and are
redirected in a two-stage reflectron onto a dual microchannel plate (MCP) detector, arriving
at a sequence of times proportional to the square root of their mass-to-charge ratios, i.e.,
(m/z)^1/2.
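This square-root scaling can be checked with the ideal time-of-flight relation t = L·√(m / 2zeU) for an ion of mass m and charge state z accelerated through a potential U over an effective path L. The path length and voltage below are illustrative assumptions, not the APL design values:

```python
import math

E_CHARGE = 1.602e-19   # elementary charge, C
AMU = 1.661e-27        # atomic mass unit, kg

def flight_time_us(m_amu, z, path_m, accel_volts):
    """Ideal time-of-flight in microseconds for an ion accelerated
    through accel_volts over an effective path length path_m."""
    v = math.sqrt(2 * z * E_CHARGE * accel_volts / (m_amu * AMU))
    return path_m / v * 1e6

# H, C, Fe, and the ~1000 amu design limit mentioned in the text,
# for an assumed 0.3 m path and 1 kV acceleration.
for m in (1, 12, 56, 1000):
    print(f"m = {m:4d} amu -> t = {flight_time_us(m, 1, 0.3, 1000):6.2f} us")
```

With these assumed parameters the heaviest ions arrive after a few tens of microseconds, consistent with the measurement duration quoted later in the text.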
The APL-built NEAR spacecraft is about to orbit the asteroid 433 Eros for 1 year to examine it
closely. The third stage of exploration requires landed packages to examine portions of the surface in
great detail, e.g., the atomic and isotopic composition of the rocks, soils, and regolith. Atomic analysis
can reveal the mineral composition of the rocks, while isotopic analysis contains information about
the origin of those materials. For example, the ratio of neon isotopes 20Ne/22Ne in terrestrial rock
samples is very different from protosolar grains. These grains are believed to be pristine samples of
the original solar nebula and have been unchanged for billions of years. One example of a third-stage
exploration mission was the Mars Pathfinder. 6 The Sojourner rover carried an alpha, proton, X-ray
(APX) instrument to examine the rock composition. It was able to determine the rough atomic
composition of a few rocks on the Martian surface. APX instruments are limited, however, because
they only measure elements in the atomic number range from magnesium to iron, and they also have
atomic, not isotopic, resolution. For the last three years, APL has been developing the Laser
Ablation Mass Spectrometer (Fig. above).
This instrument fires a very short laser pulse at the surface of a rock to vaporize and ionize a tiny
segment. Some of the ions enter a reflectron analyser where an electric field turns them around and
directs them into a detector. The properties of the electric field in the reflectron are designed so that
the time from the laser pulse until the ions reach the detector is independent of the energy of the ion
and depends only on its mass. The lightest ions (hydrogen) reach the detector first, followed by the
heavier ions, all the way to the highest masses for which the instrument is designed (~1000 amu).
Thus, just by watching the time history of ions hitting the detector, the full isotopic composition of the
sample is determined. The whole measurement takes only a few tens of microseconds. Laser ablation
mass spectrometry has been used in laboratories for a few years and is regarded as one of the most
sensitive tools for microscopic composition analysis. Laser ablation spectrometers are a natural
candidate for space instrumentation except that most are very large, not something that could be put
onto a Mars Pathfinder–sized lander. The instrument being developed at APL will be about the size of
a 1-L bottle and weigh less than 5 kg. The figure shows the isotopic composition of a meteoritic sample
measured by the APL mass spectrometer. This instrument will have another important feature, i.e., a
microscopic camera in the optical train to show the exact part of the sample that is being measured.
Since the spot being vaporized is less than 0.005 cm in diameter, it can be focused on individual
grains in the rock. With this capability, scientists need not work with just the average bulk
composition. They can examine the minerals in the rock individually. This is especially important for
breccias, rocks assembled from angular fragments broken off parent rocks that have been cemented
together into a composite rock. If the laser power is lowered somewhat, the molecules in the sample
can be ionized without breaking them apart. This can help understand the exact chemical form of the
minerals in rock or soil. It may also allow the instrument to search for organic compounds on the
surface of other planets.
dewar system. Each camera covers the same spectral band of 0.8 to 2.5 microns with different
magnifications. Next, the corrected image is relayed to a three-mirror field-dividing assembly,
which splits the light into three separate, second-stage optical paths. In addition to the field-
dividing mirror, each second stage optic uses a two-mirror relay set and a folding flat mirror.
The field-dividing mirrors are tipped to divide the light rays by almost 4.5 degrees. The tip
allows physical separation for the two-mirror relay sets for each camera and its FOV. The
curvature of each mirror allows the required degree of freedom to set the exit pupil at the cold
mask placed in front of the filter wheel of each camera. A corrected image is produced in the
centre of the Camera 1 field mirror. Its remaining mirrors are confocal parabolas with offset
axes to relay the image into the dewar with the correct magnification and minimal aberration.
Cameras 2 and 3 have different amounts of astigmatism because their fields are at different
off-axis points from Camera 1. To correct residual astigmatism, one of the off-axis relay
mirrors in Camera 3 is a hyperbola and one of the relay mirrors in Camera 2 is an oblate
ellipsoid. Camera 2 also allows a coronagraphic mode by placing a dark spot on its field-
dividing mirror. During this mode, the HST is maneuvered so that the star of observation falls
within the Camera 2 field dividing mirror and becomes occulted for coronagraphic
measurements. All the detectors are 256 × 256-pixel arrays of mercury cadmium telluride
(HgCdTe) with 40-micron pixel-to-pixel spacing. An independent, cold filter wheel
is placed in front of each camera and is rotated by room-temperature motors placed on the
external access port of the dewar. A multilevel, flat-field illumination system corrects
detector nonuniformities. The light source and associated electronics are located in the
electronics section at the rear of the instrument. IR energy is routed to the optical system
using a fiber bundle. The fiber bundle illuminates the rear of the corrector mirror, which is
partially transparent and fits the aperture from the fiber bundle. The backside of the element
is coarsely ground to produce a diffuse source. Three detector cables and three detector clock
cables route electrical signals from the cryogen tank to the hermetic connector at the vacuum
shell. The cables consist of small-diameter, stainless-steel wire mounted to a polymeric carrier
film. They are shielded to minimize noise and crosstalk between channels. (Shielding is an
aluminized polyester film incorporated into drain wires.) The cables also have low thermal
conductivity to minimize parasitic heat loads. In addition, two unshielded cables connect to
thermal sensors used during fill and for on-orbit monitoring. Besides processing signals from
and controlling the detectors, the electronics prepare data for transmission to the HST
computer, respond to ground commands through the HST and control operation of the
instrument. NICMOS uses an on-board 80386 microprocessor with 16 megabytes of memory
for instrument operation and data handling. Two systems are provided for redundancy. The
detector control electronics subsystem includes a microprocessor dedicated to operation of
the focal plane array assemblies. Two microprocessors are provided for redundancy.
The radiant energy emitted by the Sun in the form of solar radiation is one of the main
driving forces for Earth's atmosphere. Almost every cycle on Earth, biological as well as
physical, exists because of the solar radiation spectrum, which ranges from the ultraviolet
(UV) to the infrared (IR) region. The lower atmosphere near the Earth's surface is governed
mainly by the visible and IR parts of the spectrum, whereas the UV part is equally important
in controlling the middle and upper atmosphere. For living processes, and photosynthesis in
particular, the visible spectrum is the most essential part of solar radiation. Eddy
demonstrated that a small variation in solar irradiance has a great effect on Earth's
atmosphere (Eddy, 1976), a conclusion supported by more recent work. As evidence, Lean
and Rind showed that the global temperature at Earth's surface increased by around 0.1°C
with the variation of solar irradiance during solar cycle 23. Solar radiation therefore plays a
very significant role in the life cycles of the Earth. A complete investigation of the solar
radiation signal also provides useful information about different astrophysical phenomena.
In this context, Kopp demonstrated that the variation in solar irradiance is well correlated
with solar activity such as faculae and sunspots, so the measurement of the solar irradiance
signal is of great interest to researchers. The solar radiation signal is
measured either as Total Solar Irradiance outside the Earth's atmosphere [beyond the Earth
radius (~6370 km) plus the ~30 km layer containing 99% of the air mass] or as Global Solar
Irradiance at the Earth's surface. To collect Total Solar Irradiance data, NASA flew satellite
instruments such as UARS-SOLSTICE (1991 - 1999), UARS-SUSIM (1991 - 2002) and
SORCE (2003 - Feb. 2020). But to understand the effect of solar irradiance on the Earth's
atmosphere, Global Solar Irradiance is equally important. Modelling Global Solar
Irradiance from measured Total Solar Irradiance confronts many problems, because clouds
and the Earth's atmosphere can block up to 70% of the irradiance. Measurement of Global
Solar Irradiance data has therefore become an important research area.
A conventional pyranometer mostly collects the long-wave global solar irradiance signal
using a thermometric (thermopile) type sensor, whereas shortwave global solar irradiance is
measured by a radiometric type sensor. Primarily, silicon photovoltaic cells have been used
as radiometric sensors to measure solar irradiance, attaining an accuracy of around
±13 W/m², but such systems are expensive and temperature-sensitive. Some researchers
have attempted to compensate for this temperature effect using a first-class thermopile type
thermal sensor, but only partially succeeded. Photoresistor type radiometric sensors are also
used because of their lower cost, but they respond more slowly than the alternatives.
Junction semiconductors such as photodiodes and phototransistors are mostly used in
radiometric applications, as they are sensitive to a broader range of wavelengths than other
radiometric sensors. Radiometric sensors have some traditional shortcomings compared to
conventional thermometric sensors, such as directional response and temperature
dependence. A radiometric type pyranometer differs by < ±3% from a thermoelectric type
pyranometer in terms of cosine or directional response, which plays a significant role in
solar radiation measurement. The influence of ambient temperature on a radiometric sensor
is also higher than on a thermoelectric sensor.
Radiometric sensors nevertheless have advantages over thermoelectric sensors. Their
response time is very short, around 10 µs, compared to 1 - 10 s for thermoelectric sensors
(Coulson, 1975). The average measurement uncertainty for thermoelectric and radiometric
sensors is around ±5% and ±2.4% respectively. Radiometric sensors are therefore mostly
used in solar irradiance measurement systems to make them lower in cost than a
conventional pyranometer.
Methodology:
A compatible, low-cost system is designed to collect the shortwave ground solar irradiance
signal. The entire module is subdivided into two components, a sensor station and a
monitoring station. The sensor station collects the global solar irradiance signal at a
programmable sampling interval and represents it in a 10-bit compressed digital format.
The block schematic diagram of the whole arrangement is shown in Fig. 7.1.
The sensor station is portable (12.5 cm × 11.5 cm × 16 cm) and automated in nature. It can
be mounted on the rooftop of any building, provided that the focal area has no obstruction
from neighbouring structures. This section comprises the sensor module and the data
acquisition and transmission unit.
● Sensor Module:
The sensor module of the designed arrangement uses a radiometric type sensor to capture
the shortwave radiation of the Sun. The sensor is the fundamental component of any solar
irradiance measurement system, so selecting the sensor element required a rigorous
examination of commercially available radiometric sensing modules. The prime
characteristic of any solar irradiance measuring sensor is sensitivity. Regarding sensitivity,
p-n junction semiconductor diodes and transistors are more sensitive to input irradiance
than other radiometric sensors. To select an efficient sensor element in terms of sensitivity
(expressed in equation 7.1), characteristics such as Responsivity (Amp/Watt) and
irradiance-sensitive area (mm²) are examined for a wide variety of photodiodes and
phototransistors.
Sensitivity (S) = Output current deflection (Amp) / Value of input insolation producing deflection (Watt/m²)   (7.1)

Expected output current = Irradiance (Watt/m²) × Responsivity (Amp/Watt) × Sensitive area (m²)   (7.2)

I_OSD5-5T = 500 Watt/m² × 0.15 Amp/Watt × 5×10⁻⁶ m² = 3.75×10⁻⁴ Amp = 0.375 mA   (7.3)

I_S9218-01 = 500 Watt/m² × 0.22 Amp/Watt × 12.96×10⁻⁶ m² = 1.42×10⁻³ Amp = 1.42 mA   (7.4)

I_OSD15-5T = 500 Watt/m² × 0.21 Amp/Watt × 15×10⁻⁶ m² = 1.57×10⁻³ Amp = 1.57 mA   (7.5)

I_BPW21 = 500 Watt/m² × 0.34 Amp/Watt × 7.34×10⁻⁶ m² = 1.25×10⁻³ Amp = 1.25 mA   (7.6)

I_L14G2 = 500 Watt/m² × 6.25 Amp/Watt × 16.04×10⁻⁶ m² = 50.12×10⁻³ Amp = 50.12 mA   (7.7)
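The candidate currents above follow directly from the product of irradiance, responsivity and sensitive area. A minimal numeric check, using the responsivity and area values quoted above and the assumed 500 W/m² test irradiance:

```python
# Expected short-circuit photocurrent for each candidate sensor at an assumed
# test irradiance of 500 W/m^2, following I = E * R * A, where E is irradiance
# (W/m^2), R responsivity (A/W) and A the sensitive area (m^2).
IRRADIANCE = 500.0  # W/m^2, test condition used in the text

sensors = {
    "OSD5-5T":  (0.15, 5e-6),
    "S9218-01": (0.22, 12.96e-6),
    "OSD15-5T": (0.21, 15e-6),
    "BPW21":    (0.34, 7.34e-6),
    "L14G2":    (6.25, 16.04e-6),
}

def photocurrent_mA(responsivity_A_per_W, area_m2, irradiance=IRRADIANCE):
    """Return the expected output current in milliamperes."""
    return irradiance * responsivity_A_per_W * area_m2 * 1e3

for name, (resp, area) in sensors.items():
    print(f"{name}: {photocurrent_mA(resp, area):.3f} mA")
```

Running this reproduces the milliampere figures of equations (7.3) to (7.7), with the L14G2 clearly the most sensitive.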
From Tables 7.1 and 7.2, the phototransistor L14G2 established a lower cost per unit as
well as a better sensitivity response, so the L14G2 is utilized as the sensor element for this
formulated arrangement. This phototransistor can be used in either common-collector or
common-emitter mode depending on the response required for a particular solar irradiance
measuring system. The solar irradiance signal collected by the sensor module is further
represented in digital format using a 10-bit A/D converter, so linearity of the sensor element
is a very significant characteristic for capturing the solar irradiance signal. The output
current of the phototransistor is directly proportional to the solar irradiance, but the output
voltage is affected by the load. The phototransistor with different load values was kept in
full sunlight on a clear day, and its normalized output response against input irradiance is
shown in Fig. 7.2. It is observed that the output voltage flattens with increasing load value.
The current design uses a 0.1 Ω load resistance because of its linear response.
Gain = (49.4 kΩ / R_G) + 1
Fig. 7.2. Normalized response of the phototransistor with different load values.
In the current design, R_G has been chosen as 650 Ω to provide a gain factor of around 77.
After amplification, a low-pass filter circuit is used to minimize possible interference at the
ADC input. The current design places the cut-off frequency of that low-pass filter at around
10 Hz with R = 6.8 kΩ and C = 2.2 µF.
Cut-off frequency = 1 / (2πRC) = 1 / (2 × 3.14 × 6.8×10³ × 2.2×10⁻⁶) = 10.64 Hz
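Both front-end design values can be verified numerically. A short sketch using the component values stated above (the 49.4 kΩ constant is taken from the gain equation quoted in the text):

```python
import math

R_G = 650.0  # ohms, gain-setting resistor from the text
R = 6.8e3    # ohms, low-pass filter resistor
C = 2.2e-6   # farads, low-pass filter capacitor

# Amplifier gain per the equation quoted above: Gain = 49.4 kOhm / R_G + 1
gain = 49.4e3 / R_G + 1

# First-order RC low-pass cut-off frequency: f_c = 1 / (2*pi*R*C)
f_c = 1.0 / (2 * math.pi * R * C)

print(f"gain = {gain:.1f}, cut-off = {f_c:.2f} Hz")
```

This reproduces the gain of about 77 and the cut-off frequency of about 10.64 Hz stated in the text.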
The amplified and filtered signal is fed to the microcontroller for further processing. An
ATmega32L is chosen in the current design for its low power consumption. The
microcontroller converts the input signal into its digital equivalent using the built-in A/D
converter. The current design uses the internal reference voltage for A/D conversion, which
makes the system resolution around 1.40 W/m². The digitized data is compressed using the
DMBRLE technique (discussed in the next section) to enhance the efficiency of the
memory as well as the transmission channel (Roy et al., 2015). This compression method is
lossless in nature and provides a compression ratio of up to 74.56%. The compressed data
is temporarily stored in the internal memory until a command is received from the
monitoring station. The compressed characters are communicated from the sensor station to
the monitoring station in a packet format using a ZigBee Pro module at a 9600 baud rate.
Upon completion of one data transmission session, the internal memory is refreshed for the
next session. Compressed data are transmitted in the form of a data packet consisting of a
header byte and an error checksum byte for error detection at the receiver end. The data
packet format is shown in Fig. 7.4.
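The framing just described (a header byte, the compressed payload, and a trailing checksum byte) can be sketched as follows. The specific header value 0xAA and the modulo-256 additive checksum are illustrative assumptions, since the text specifies only that a header byte and an error checksum byte are present:

```python
def build_packet(payload: bytes, header: int = 0xAA) -> bytes:
    """Frame compressed bytes with a header byte and an additive checksum byte.
    The header value and modulo-256 checksum are assumptions for illustration."""
    checksum = sum(payload) % 256
    return bytes([header]) + payload + bytes([checksum])

def verify_packet(packet: bytes, header: int = 0xAA):
    """Return the payload if header and checksum match, else None."""
    if len(packet) < 2 or packet[0] != header:
        return None
    payload, checksum = packet[1:-1], packet[-1]
    return payload if sum(payload) % 256 == checksum else None
```

At the monitoring station, a packet that fails the checksum would simply be rejected and re-requested.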
It can be observed that daily solar irradiance data comprises several successive identical values from
late afternoon to early morning, as shown in Fig. 7.5. The signal also shows very low-frequency
changes during the early morning, mid-noon and late afternoon, and high-frequency changes during
the rest of the day. These factors play the main role in developing the compression algorithm, whose
main objective is to compress the successive identical samples as well as the low-frequency regions.
The developed algorithm is based jointly on the First Sample Difference (FSD) and S-Run Length
Encoding (S-RLE) techniques: FSD is modified and employed to concentrate on the low-frequency
regions, and S-RLE on regions having successive identical sample values S. The logic flow diagram
of the developed compression algorithm is shown in Fig. 7.6.
Fig. 7.5: Typical daily global solar irradiance data.
To achieve minimum latency between the two transceiver devices, one frame is constructed
comprising one original sample along with the mean value [K], followed by 256 FSD
elements. The First Sample Difference (FSD) elements of the original 10-bit quantized
samples are computed by:
The elements of the FSD array mostly lie within the ±10 region, with the rest within the
±100 region, and are passed to magnitude and sign byte encoding (Gupta and Mitra, 2011).
In the sign encoding technique (Gupta and Mitra, 2011), one sign byte (which stores the
sign information of the next eight FSD elements) is followed by eight magnitude bytes. The
D7 bit of the sign byte represents the sign of the 1st of the eight magnitude elements and,
similarly, D0 represents the 8th, as described by:
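The sign-byte rule can be sketched directly: bit D7 carries the sign of the first of the eight following magnitude elements and D0 the eighth. Treating a set bit as "negative" is an assumption here; the text does not state the polarity:

```python
def sign_byte(block):
    """Pack the signs of up to eight FSD elements into one byte.
    Bit D7 holds the sign of the 1st element, D0 the 8th.
    A set bit marks a negative element (polarity is an assumption)."""
    b = 0
    for i, v in enumerate(block[:8]):
        if v < 0:
            b |= 1 << (7 - i)
    return b
```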
In the magnitude encoding technique, two small FSD values (within ±9) are represented by
a single packed byte (a nibble combination), and the others by a byte combination (with an
offset of 100). The rules for the magnitude encoding technique are given as:
[Where x(i) represents the first-difference element and y(j) the magnitude-encoded byte.]
To enhance compression performance further, a bias value is used to winnow out negative
data and thereby dispense with the sign byte encoding step. The bias value [K] is obtained
as the mean value of one data frame. The magnitude encoding technique is modified
accordingly: FSD values from [K-5] to [K+5] are eligible for the nibble combination, and
the rest use the byte combination.
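A hedged sketch of the magnitude-encoding rule described above: consecutive small values (0 to 9 after bias removal) are packed in pairs into one byte that stays below 100, while larger values carry an offset of 100 so the decoder can distinguish the two cases. The decimal pairing and the handling of a trailing unpaired small value are assumptions; the text does not spell them out:

```python
def magnitude_encode(values):
    """Sketch of magnitude encoding: pairs of small values (0..9) become one
    byte 10*v1 + v2 (<= 99, the 'nibble combination'); larger values are
    emitted as v + 100 (the 'byte combination'). Pairing scheme is assumed."""
    out = []
    i = 0
    while i < len(values):
        v = values[i]
        if 0 <= v <= 9 and i + 1 < len(values) and 0 <= values[i + 1] <= 9:
            out.append(10 * v + values[i + 1])  # packed pair, stays in 0..99
            i += 2
        elif 0 <= v <= 9:
            out.append(10 * v)  # trailing small value padded with 0 (assumption)
            i += 1
        else:
            out.append(v + 100)  # byte combination with offset 100
            i += 1
    return out
```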
Lastly, S-RLE is applied to further compress runs of the same sample difference [S],
representing each run by only two bytes. The first byte is fixed as 00 (indicating the start of
a same-sample-value sequence) and the second byte equals the number of consecutive S
elements. The rules for the run length encoding technique are given as:
[Where y(j) is the array after RLE and x(i) is the array before RLE.]
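A minimal sketch of the S-RLE stage as described: each run of the repeated value S is replaced by a 00 marker byte followed by the run count. Capping runs at 255 so the count fits in one byte is an assumption:

```python
def s_rle_encode(samples, s):
    """Replace each run of the repeated value `s` by the pair (0x00, count).
    Other values pass through; runs are capped at 255 (assumption) so the
    count fits in a single byte."""
    out = []
    i = 0
    while i < len(samples):
        if samples[i] == s:
            run = 0
            while i < len(samples) and samples[i] == s and run < 255:
                run += 1
                i += 1
            out += [0x00, run]
        else:
            out.append(samples[i])
            i += 1
    return out
```

Night-time stretches of identical readings, which dominate a daily record, collapse to two bytes per run.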
Table 7.3: Summarized Compression Algorithm
A key requirement of any radiometric solar radiation measurement system is that its output
does not change with temperature. From the datasheet of the selected phototransistor, the
output current has a temperature dependence of approximately 1%/°C. To minimize this
deviation, a temperature control system is incorporated into the measurement system. This
control system uses an integrated temperature sensor IC (LM35) to measure the internal
temperature and a Peltier module (TEC1-12706) to hold it at a defined value. The current
design maintains the internal temperature at a maximum of 25°C, with the microcontroller
sending the control signal to the Peltier module through a relay-based power stage as shown
in Fig. 7.3. To maintain the internal temperature more effectively, the outside wall of the
measurement system is made of polyethene 10 mm thick. The microcontroller also uses the
solar irradiance value as an input to stop the temperature control system during nighttime
and so optimize energy consumption.
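The control decision above reduces to a simple rule: cool only when the internal temperature exceeds the setpoint, and never at night. A sketch (the 5 W/m² night-detection threshold is an assumption; the text only says the controller stops at night):

```python
def peltier_command(temp_c, irradiance_w_m2, setpoint=25.0, night_threshold=5.0):
    """Return True when the Peltier cooler should be switched on.
    night_threshold (W/m^2) is an assumed cut-off for detecting nighttime."""
    if irradiance_w_m2 < night_threshold:
        return False          # nighttime: controller disabled to save energy
    return temp_c > setpoint  # cool only above the 25 degC setpoint
```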
Fig. 7.7. Electro-Mechanical arrangement to protect the sensor module from environmental hazards
The monitoring station has application software that can be operated by personnel without
special training. This application software provides a graphical user interface to control the
sensor station for acquiring and visualizing the solar irradiance data, as shown in Fig. 7.8.
Once the “Data Acquisition” button is pressed, the software establishes the communication
channel between the monitoring station and the sensor station using handshaking signals,
followed by a command signal to collect the compressed solar irradiance data. Data
reliability and data acceptance are checked for the compressed data by computing the bit
error rate and packet error rate respectively. Upon receiving reliable, accepted data, a “Data
Acquisition Completed” message pops up. Once the “Data Visualization” button is pressed,
the software decompresses the compressed characters and reconstructs the original solar
irradiance data samples. The decompression algorithm applies the reverse logic of the
DMBRLE compression algorithm (described in the next section).
BRE = Number of inaccurate bits / Total number of bits received
Fig. 7.8. Graphical User Interface of the monitoring station to acquire and visualize global solar
irradiance data
● Decompression Algorithm:
The decompression technique extracts the original data stream from the compressed
characters. It is actualized by decompressing the S-RLE stage, followed by the magnitude
encoding and the FSD array. Whenever a data combination of 00 followed by a number is
received, the corresponding run of identical samples is retrieved. Similarly, data between
00 and 99 proceed to extraction from the nibble combination, and data greater than 100
from the byte combination. Finally, the original data stream is computed from the FSD
elements together with the mean value and the original sample. The summarized rules for
decompression are given in Table 7.4.
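The first decompression stage can be sketched as the inverse of S-RLE: each (00, count) pair expands back into `count` copies of the repeated value. Nibble/byte magnitude decoding and FSD reconstruction would follow; they are omitted here:

```python
def s_rle_decode(stream, s):
    """Inverse of the S-RLE stage: expand each (0x00, count) pair back into
    `count` copies of the repeated value `s`. Other bytes pass through.
    Only this first stage is sketched; magnitude and FSD decoding follow."""
    out = []
    i = 0
    while i < len(stream):
        if stream[i] == 0x00 and i + 1 < len(stream):
            out += [s] * stream[i + 1]
            i += 2
        else:
            out.append(stream[i])
            i += 1
    return out
```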
One of the fundamental characteristics of any solar irradiance measurement system is its cosine or
directional response: the output of a reliable system should not oscillate with axial variation. By
Lambertian (directional) response testing, the precision with which measured solar irradiance can be
rectified into its normal component is calculated. To compute this characteristic, sunlight is the ideal
source, but continuous measurement is difficult owing to the lack of clear-sun periods. The system
with a single phototransistor shows a normalized directional error of around 13.96% due to its
narrow reception angle. To enhance this response, five phototransistors are used in an arrangement in
which four phototransistors are placed on a hemispherical surface at equal distances, set by their
reception angle, from the central one, as shown in Figs. 7.10 and 7.11(a). Among the four outer
sensors, two are aligned east-west for zenith-component correction and the other two north-south for
azimuth-component correction of the Sun. All the phototransistors are connected in parallel to collect
their combined current. With the five-sensor arrangement, the system enhances the total reception
angle and therefore the overall directional response. Fig. 7.11(b) shows the directional response of
this arrangement, which mostly follows the ideal directional response curve. The overall error of this
arrangement is around 6.77% up to 90°, better than the single sensor. Table 7.5 presents a
comparative study between the developed arrangement and other solar irradiance measurement
systems in terms of directional response error.
Δdirectional = { [ (R(Z) + R(−Z)) / 2 − ZERO(Z) ] / [ (R(0°) − ZERO(Z)) × cos(Z) ] } × 100%   (7.17)
Where R(0°) is the pyranometer response at 0° angle of incidence, R(Z) the pyranometer response at
Z° angle of incidence, and ZERO(Z) the pyranometer dark-signal response at Z° angle of incidence.
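The expression above can be checked numerically. A minimal sketch with synthetic test values; for an ideal cosine-law detector the dark-corrected ratio evaluates to 100%:

```python
import math

def directional_response_pct(r_z, r_neg_z, zero_z, r_0, z_deg):
    """Evaluate the directional-response expression: the averaged,
    dark-corrected response at angle Z relative to the ideal cosine-weighted
    response at normal incidence, in percent (100% = ideal)."""
    numerator = (r_z + r_neg_z) / 2.0 - zero_z
    denominator = (r_0 - zero_z) * math.cos(math.radians(z_deg))
    return 100.0 * numerator / denominator
```

The quoted directional errors (6.77% up to 90° for the five-sensor arrangement) are deviations of this quantity from 100%.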
Fig. 7.10. Apparatus used to determine the directional response of the developed system.
Fig. 7.11. (a) Arrangement with five phototransistors; (b) Normalized directional response of the
developed system.
Table 7.5: Comparison of the developed system with other measurement systems in terms of
directional response error

Solar Irradiance Measurement System | Directional Response Error
PSP (Eppley Lab) (Dunn et al., 2012) | < ±1% up to 0 - 70º
CM 11 (Kipp & Zonen) (Dunn et al., 2012) | < ±3% up to 70 - 80º
STAR (Weathertronics) (Dunn et al., 2012) | < ±2% up to 0 - 70º; < ±5% up to 70 - 80º
RTP (Martinez et al., 2009) | ±2% up to 60º; ±6% up to 80º
Developed Arrangement | ±2.04% up to 60º; ±4.47% up to 80º; ±6.77% up to 90º
Calibration:
The main objective of the calibration process is to make the system able to represent its
output in terms of the global solar irradiance signal. As per the standard (ISO 9847:1992),
calibration can be done by either the outdoor or the indoor calibration method. The outdoor
calibration process is carried out by comparing the developed arrangement with a reference
pyranometer at the same 5-minute sampling rate. The pyranometer from Delta Ohm
(LP PYRA 10) is taken as the reference; it is categorized as a “secondary standard”
according to ISO 9060:1990 and adopted by the World Meteorological Organization
(WMO).
First, both instruments are placed at the same location (Latitude: 22º 34' N, Longitude:
88º 24' E, Elevation: 11 m) at a common tilt angle. Instantaneous readings from both
instruments are taken at 5-minute intervals under cloudless, stable sky conditions over 5
consecutive days. For each sampling interval the Calibration Factor [CF(j)] is calculated
and, finally, the Calibration Multiplier [CM] is computed over all sampling intervals.
CF(j) = CF_S × [ Σ (i = 1 to n) V_SP ] / [ Σ (i = 1 to n) V_DS ]

CM = (1/m) × Σ (j = 1 to m) CF(j)

Where CF_S is the calibration factor of the standard pyranometer, and V_SP and V_DS are
the outputs of the standard pyranometer and the developed system respectively.
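The two calibration equations can be sketched numerically; the calibration factor of the reference pyranometer and the voltage readings in the test are illustrative numbers, not measured data:

```python
def calibration_factor(cf_s, v_sp, v_ds):
    """CF(j) = CF_S * (sum of reference readings) / (sum of developed-system
    readings) over the n samples of one calibration interval."""
    return cf_s * sum(v_sp) / sum(v_ds)

def calibration_multiplier(cfs):
    """CM = mean of the per-interval calibration factors CF(j)."""
    return sum(cfs) / len(cfs)
```

Multiplying the raw output of the developed system by CM then yields its reading in W/m².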
After calibrating the developed system, the prime concern is to assure its acceptance level
by computing the mean absolute error and the coefficient of determination. The response of
the calibrated system alongside the standard pyranometer (maker: Delta Ohm, model:
LP PYRA 10) for a single day is presented in Fig. 7.12. The developed arrangement yields
a mean absolute error of 1.27 and a coefficient of determination of 0.999, which justifies
the calibration process. Table 7.6 presents a comparative study between the developed
arrangement and other solar irradiance measurement systems in terms of mean absolute
error.
Fig. 7.12. Response of the calibrated developed system with the standard pyranometer for a single day.
Firstly, the developed compression algorithm was examined on a simulation platform to
evaluate its performance index; afterwards it was executed on a low-power ATmega32L
microcontroller. The algorithm used yearly global solar irradiance data, hosted by the
National Renewable Energy Laboratory [NREL], with a sampling rate of 1 hour from
different locations across India. To quantify the compression performance, the
Compression Ratio (CR) is used, defined as:
Compression Ratio = 100 × (Number of original bytes − Number of compressed bytes) / Number of original bytes
The compression algorithm was refined from simple FSD with no bias to FSD with bias
and S-RLE through four intermediate stages to achieve the best performance, as presented
in Table 7.7. The stepwise effect on the CR is strongly non-linear for the same data set;
evidently, First Sample Difference with bias and S-RLE affords a better CR than the
others.
The quality of the recovered signal is demonstrated in Table 7.8 by computing the
Normalized Root Mean Square Error (NRMSE) of the reconstructed signal:

NRMSE = √[ mean( (x − x̂)² ) ] / [ max(x) − min(x) ]
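The NRMSE expression above is straightforward to evaluate; a minimal sketch with synthetic vectors:

```python
def nrmse(x, x_hat):
    """Normalized RMSE of a reconstructed signal: RMSE divided by the
    range max(x) - min(x) of the original signal."""
    mse = sum((a - b) ** 2 for a, b in zip(x, x_hat)) / len(x)
    return mse ** 0.5 / (max(x) - min(x))
```

For a lossless scheme such as DMBRLE, the reconstructed signal matches the original exactly and the NRMSE is zero.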
Table 7.8 presents the performance index of the developed compression algorithm among
some other compression algorithms, which are either lossless or lossy in nature. The
developed compression algorithm was rigorously tested with global solar irradiance data
from different geographical locations across India. The performance of the developed
compression algorithm is shown in Fig. 7.13 for the global solar irradiance data of the
Kolkata region.
Fig. 7.13: Comparison between original and reconstructed signal after compression-decompression
Table 7.8: Comparative study between the developed algorithm and other compression algorithms.
Precision = √[ Σ_j ( X_GSI,DS(j) − X̄ )² / (m − 1) ] / X̄ , where X̄ = (1/m) Σ_j X_GSI,DS(j)
Where X_GSI,DS and m represent the solar irradiance value measured by the developed
system and the number of measurement values respectively.
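In this coefficient-of-variation form, the precision expression reduces to the sample standard deviation of the readings divided by their mean. A minimal sketch:

```python
def precision(values):
    """Precision of repeated readings: sample standard deviation (with the
    m - 1 divisor used in the expression above) divided by the mean."""
    m = len(values)
    mean = sum(values) / m
    variance = sum((v - mean) ** 2 for v in values) / (m - 1)
    return variance ** 0.5 / mean
```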
Fig. 7.14. Apparatus used to determine the precision of the developed system.
Some random daily solar irradiance values at 30-minute intervals under different weather
conditions are presented in Fig. 7.15. These data were taken by the developed system as
well as the standard pyranometer (maker: Delta Ohm, model: LP PYRA 10) at our
University campus (Latitude: 22º 34' N, Longitude: 88º 24' E, Elevation: 11 m).
The short-term performance of the developed system is measured by computing statistical
errors. The statistical error, in terms of mean bias error and root mean square error, is
worked out from the collected data using the following equations.
Mean bias error = [ Σ_j ( X_GSI,DS(j) − X_GSI,SP(j) ) ] / m

Root mean square error = √[ Σ_j ( X_GSI,DS(j) − X_GSI,SP(j) )² / m ]
Where X_GSI,DS and X_GSI,SP represent the solar irradiance values measured by the
developed system and the standard pyranometer respectively.
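The two error statistics can be computed directly from paired readings; a minimal sketch with synthetic data:

```python
def mean_bias_error(ds, sp):
    """MBE: average signed difference between developed-system readings (ds)
    and standard-pyranometer readings (sp)."""
    return sum(d - s for d, s in zip(ds, sp)) / len(ds)

def root_mean_square_error(ds, sp):
    """RMSE: square root of the mean squared difference between the two
    instruments' readings."""
    return (sum((d - s) ** 2 for d, s in zip(ds, sp)) / len(ds)) ** 0.5
```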
Fig. 7.15: Random daily solar irradiance values at 30-minute intervals in different weather
conditions. The blue line represents the reading of the standard pyranometer and the red dashed line
the response of the developed system at a common location.
For an ideal measurement system, both the mean bias error and the root mean square error
are zero, so lower values of both are desired for an effective measurement system. The
developed system yields a mean bias error of −0.2819 W/m² and a root mean square error
of 2.40 W/m², which are quite acceptable for solar irradiance measurement systems.
Finally, the confidence level of the developed system is assessed with the t-statistic. Table
7.10 presents the statistical performance of the developed system compared with other
systems.
t-statistic = [ (m − 1) × MBE² / (RMSE² − MBE²) ]^(1/2)
[Where X_GSI,DS(j) = response of the developed system at the jth time interval,
X_GSI,SP(j) = response of the standard pyranometer at the jth time interval, and m =
number of sampling time intervals]
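The t-statistic follows directly from the MBE and RMSE already computed; a minimal sketch with synthetic inputs (the number of sampling intervals m behind the tabulated values is not stated in the text):

```python
def t_statistic(mbe, rmse, m):
    """t-statistic from mean bias error, root mean square error and the
    number of sampling intervals m, per the equation above."""
    return ((m - 1) * mbe ** 2 / (rmse ** 2 - mbe ** 2)) ** 0.5
```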
Table 7.10: Statistical performance of the developed system compared with other systems.

Solar Irradiance Measuring System | Mean bias error (W/m²) | Root mean square error (W/m²) | t-statistic
RMP001 (Medugu et al., 2010) | −0.19 | 0.72 | 1.04
RMP003 (Burari et al., 2010) | 12.59 | 43.60 | 1
Developed Arrangement | −0.2819 | 2.40 | 1.99
8.1 Introduction:
The Sun plays a pivotal role in helping us understand the rest of the astronomical universe:
its close proximity to the Earth enables us to study its surface and all of its activity in
exquisite detail. Hence, it acts as a paradigm of Sun-like stars.
Studying and modelling the diverse manifestations of the Sun is not solely an academic
exercise to extend our understanding of the Sun's nature; it is also key to comprehending
the Sun-Earth connection. Solar explosive and transient events emerging from the Sun's
corona eject highly energetic particles into interplanetary space which, if aimed towards the
Earth, present a substantial danger to the very existence of life. Hence, the need to study
and understand these energetic events is of prime importance in view of space weather.
‘Space weather describes the dynamic and highly variable environmental conditions on the
Sun, in the interplanetary medium and in the ionosphere-magnetosphere system to the
ground’ (Baker, 2005; Singh et al., 2010). The aim of space-weather studies is to predict
solar variability and to understand its impact on life on Earth (Strong et al., 2012).
Solar energetic events significantly alter the Earth's radiation environment by prompt
heating (Lean, 1997) and are detrimental to the scientific instrumentation on spacecraft and
payloads (McKenna-Lawlor, 2008). Furthermore, the shock waves associated with solar
energetic events play a significant role in compressing the Earth's magnetosphere and
triggering geomagnetic storms (Schwenn, 2005) in the magnetosphere and ionosphere. The
solar radio bursts linked with solar activity lead to disruption of satellite-to-ground
communication and radar systems (Messerotti, 2008), along with GPS navigation system
failures (Chen, 2005). Radio signal detection systems are very advantageous in solar
observation for understanding solar eruptive events such as flares and CMEs in the solar
atmosphere through their associated radio bursts. The Sun is a highly variable source, and
detection of its radio signatures offers the best available approach for probing the science
linked with such events. Transient events of the Sun can occur across a broad frequency
range (≈10 kHz − 10 GHz), the majority of which are observable with good contrast at
frequencies below 500 MHz (Benz, 1993) from ground-based instruments. Emission from
the Sun at radio frequencies originates at various heliocentric distances (r) in the solar
atmosphere owing to the intrinsic electron density gradient, so the structure of a burst alters
from one frequency to another. Hence, to obtain data on radio emission in relation to flares
and CMEs, continuous monitoring across a broad frequency span is required. For this
purpose, a variety of radio telescopes have been developed utilizing the antenna (based on
the concept of transmission lines) as the elementary receiving component.
The antenna plays an essential role in any system where communications are wireless in
nature. In accordance with the IEEE standard definition, “antennas are defined as a means
for radiating or receiving radio waves”. The signal from the transmitter side is transmitted
through space as electromagnetic energy using a transmitting antenna and is captured by
the receiving antenna, or vice-versa. This in turn induces a voltage in the antenna (normally
a conductor). The RF voltages induced in the receiving antenna are then passed to the
receiving device, which converts the transmitted RF information back to its original form.
Thus, the antenna can be considered a transducer responsible for converting a guided
electromagnetic wave to free-space waves and vice-versa. In general, any piece of structure
or conducting wire acts as an antenna; however, the radiation characteristics depend on the
dimensions of the structure used. Antennas are the elementary receiving component of a
radio telescope and come in different types as well as dimensions. According to the
reciprocity theorem, the characteristics of an antenna when transmitting precisely match
those when it is used as a receiving antenna.
These types of antennas may be more familiar to us today than in the past, owing to the
increasing demand for more advanced forms of antennas and the use of higher frequencies.
Aperture antennas have found use in spacecraft and aircraft applications, as they can very
easily be flush-mounted on the spacecraft or aircraft skin. Additionally, they can be coated
with a dielectric material to protect them from hazardous environmental conditions.
Various types of aperture antennas are shown in Figure 8.2.
Microstrip Antenna:
This type of antenna came into demand during the 1970s owing to its spaceborne
applications; today they are utilized in both commercial and government applications. The
basic structure of the microstrip antenna comprises a metallization area mounted over a
grounded dielectric substrate and fed against the ground at a suitable position. The
metallization area can take various forms, but circular and rectangular patches are most
favoured because they can be fabricated and analyzed with ease and have good radiation
characteristics. The advantages of this type of antenna, such as simple and inexpensive
fabrication using printed-circuit technology, conformability to planar and nonplanar
surfaces, mechanical robustness when installed on rigid surfaces, and great versatility in
performance characteristics, make it suitable for use on the surfaces of spacecraft,
satellites, high-performance aircraft, mobile devices and even cars.
Figure 8.3 Different types of Microstrip antenna (source: Balanis, C. A. Antenna theory: analysis and design. John
Wiley & Sons,4th edition)
8.2.3 Reflector Antennas: Success in the exploration of outer space triggered an expansion
of antenna theory. The necessity to communicate across great distances led to the
development of sophisticated antennas that can transmit and receive signals travelling
millions of miles. That purpose gave rise to parabolic reflector antennas, which are
constructed with large diameters to achieve the high gain needed when transmitting or
receiving signals over millions of miles of travel. A typical parabolic reflector is shown in
Figure 8.4.
Figure 8.4 Typical configuration of Reflector type antenna (source: Balanis, C. A. Antenna theory: analysis
and design. John Wiley & Sons, 4th edition)
8.2.5 Lens Antenna:
Lenses are mainly utilized to collimate incident divergent signals and prevent them from
scattering in different directions. With a proper geometrical configuration and a suitable
choice of lens material, the arrangement can convert various forms of divergent energy into
plane waves. Similar to parabolic reflectors, the lens antenna finds use in high-frequency
applications; for low-frequency operation, the dimensions as well as the weight of the lens
antenna become very large. These antennas are categorized by their geometrical shape and
the material used for their construction. A typical lens antenna is shown in Figure 8.5.
Figure 8.5 Typical configuration of Lens type antenna (source: Balanis, C. A. Antenna theory: analysis and design.
John Wiley & Sons,4th edition)
The structure associated with the transition region between a guided wave and a free-space wave (or vice versa) is termed an antenna, while the transmission line is the device responsible for guiding radio-frequency energy from one place to another. In general, the energy should be transferred with as little attenuation, radiation and heat loss as possible. This in turn indicates that when energy is transmitted from one place to another, it should either be confined to the transmission line or bound closely to it. Hence the wave transmitted along the line is one-dimensional, in the sense that it adheres to the line rather than spreading out in space. An infinite lossless transmission line carries a uniform travelling wave along the line when a generator is connected to it. If the line is short-circuited, the outgoing waves are reflected, generating a standing wave on the line owing to interference between the reflected and outgoing waves. A standing wave has local concentrations of energy. When the reflected wave is equal to the outgoing wave, we obtain a pure standing wave. The energy concentration of such a wave oscillates from completely electric to completely magnetic and back twice per cycle. This behaviour is observed in a resonant circuit, or resonator; the term resonator is generally used for devices whose stored energy is large compared with the net energy flow per cycle. In summary, an antenna receives or transmits energy, a transmission line guides energy, and a resonator stores energy. A guided wave travelling along a transmission line that opens out, as depicted in Figure 8.7, will radiate as a free-space wave.
Figure 8.7 Generator, transmission line, antenna and separation of energy as free space wave (source: Kraus, J. D., &
Marhefka, R. J. Antennas for all Applications; 2nd edition)
The wave along the transmission line is a plane wave, while the free-space wave radiating from it is an expanding spherical wave. The energy is guided as a plane wave with low loss while it travels along the uniform part of the line, where the separation between the wires is a small fraction of the wavelength. However, when the transmission-line separation approaches a wavelength or more, the wave radiates out, so that the opened-out line behaves as an antenna that launches the free-space wave. The current travelling along the transmission line terminates when it flows out of the line, but the associated field continues to move.
8.4 Antenna Parameters:
To describe the performance of an antenna, knowledge of various performance parameters is necessary in order to determine whether or not the designed antenna is suitable for the required application. In this section, some of the important antenna parameters applicable to all types of antennas are discussed.
8.4.1 Input Impedance:
The ratio of the relevant components of the electric to the magnetic field at a point, or the ratio of voltage to current at a pair of terminals, defines the antenna input impedance. The input impedance Za of the antenna (under no-load conditions) consists of a real and an imaginary part, written as
Za = Ra + jXa (8.1)
where Ra is the input resistance and Xa the input reactance. By reciprocity, the antenna impedance is the same for transmitting and receiving operation. Ra accounts for dissipation, which occurs in two ways: one part corresponds to power that leaves the antenna and does not return (i.e. radiation), while the other part is ohmic loss, as in a lumped resistor. Xa represents the power stored in the antenna's near field.
Generally, the resistive part consists of two components,
Ra = Rr + RL (8.2)
where Rr is the radiation resistance and RL the loss resistance.
Figure 8.8 (a) Antenna transmitting a wave (b) Thevenin equivalent circuit
Assuming the antenna is connected to a source with internal impedance Zg,
Zg = Rg + jXg (8.3)
then Figure 8.8(a) reduces to the equivalent circuit shown in Figure 8.8(b). To find the power supplied to Rr for radiation, as well as the heat dissipated in RL, we calculate the power delivered to the antenna for radiation,
Pr = (1/2) Rr |Ig|² (8.4)
and the power dissipated as heat,
PL = (1/2) RL |Ig|²
where Ig = Vg/(Za + Zg) is the generator current. The remaining power is dissipated as heat in the internal impedance of the source excitation,
Pg = (1/2) Rg |Ig|²
Maximum power transfer occurs when the antenna is conjugate matched to the source:
Rg = Rr + RL
Xa = −Xg
If the antenna is instead connected to the driving circuit through a transmission line of characteristic impedance Zo, then the antenna should be matched to the characteristic impedance of the line:
Za = Zo, i.e. Zo = Rr + RL and Xa = 0
Zin denotes the internal impedance seen looking toward the terminating side of a receiving antenna; generally Zin ≠ Za. The internal impedance is used to model the equivalent circuit of a receiving antenna, just as the input impedance is used to model the equivalent circuit of a transmitting antenna.
8.4.2 Reflection Coefficient:
The reflection coefficient of a transmitting or receiving antenna is given by
Γ = (Za − Zo)/(Za + Zo)  (for a transmitting antenna)
Γ = (Za − Zin)/(Za + Zin)  (for a receiving antenna)
Γ is a dimensionless parameter which can be either calculated or measured. The magnitude of Γ lies between 0 and 1. When there is a mismatch between the transmission-line characteristic impedance and the transmitting-antenna input impedance (Zo ≠ Za), return loss occurs due to reflection of the wave at the antenna terminals. When expressed in dB, the reflection coefficient is a negative value.
8.4.3 Return Loss:
The return loss quantifies the amount of signal reflected back when the impedances do not match, expressed as
Return loss = −20 log|Γ|  (dB)
The higher the return loss, the more power is delivered to the antenna.
8.4.4 Voltage Standing Wave Ratio:
To deliver maximum power to the antenna, the impedance of the transmitting or receiving antenna should match the characteristic impedance of the transmission line. VSWR is the parameter that numerically expresses how well the antenna impedance is matched to the transmission line it is connected to. VSWR is defined as
VSWR = (1 + |Γ|)/(1 − |Γ|)
and is a function of the reflection coefficient. VSWR is a positive real number with a minimum value of 1.0 (the ideal condition in which the antenna reflects no power). The smaller the VSWR, the better the match between the transmission line and the antenna impedance, and hence the more power delivered to the antenna.
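As a concrete illustration of how the reflection coefficient, return loss and VSWR tie together, the following sketch evaluates the three formulas above. The function name and the 75 Ω antenna on a 50 Ω line are our own illustrative choices, not values from the text.

```python
import math

def match_metrics(Za, Zo):
    """Reflection coefficient magnitude, return loss (dB) and VSWR for an
    antenna of input impedance Za fed by a line of characteristic impedance Zo."""
    gamma = (Za - Zo) / (Za + Zo)        # reflection coefficient (complex in general)
    mag = abs(gamma)                     # 0 <= |Gamma| <= 1
    return_loss = -20 * math.log10(mag)  # dB; larger means a better match
    vswr = (1 + mag) / (1 - mag)         # >= 1.0; exactly 1.0 for a perfect match
    return mag, return_loss, vswr

# Hypothetical example: a purely resistive 75-ohm antenna on a 50-ohm line
mag, rl, vswr = match_metrics(75 + 0j, 50 + 0j)
```

With these numbers |Γ| = 0.2, so roughly 4% of the incident power is reflected and the VSWR is 1.5, which is normally regarded as an acceptable match.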
8.4.5 Radiation Pattern:
The antenna (or radiation) pattern is a graphical representation of a mathematical function of the antenna's radiation properties in terms of spherical coordinates. In most instances, the antenna pattern is obtained in the far-field region and is expressed as a function of the directional coordinates. 3-D antenna patterns can be represented in a spherical coordinate system showing the radiated power strength over the far-field sphere enclosing the antenna. The x-y plane (φ measurement at θ = 90°) is the azimuthal plane, which contains the magnetic field vector (H-plane), while the x-z plane (θ measurement at φ = 0°) is the elevation plane, which contains the electric field vector (E-plane). The E- and H-planes contain the direction of maximum radiation. Figure 8.9 shows a 3-D radiation pattern plot of the half-wave dipole.
Figure 8.9 A typical 3-D antenna pattern plot of the half wave dipole (source: Rahman, I., & Aftab Uddin Tonmoy, S.
(2014). Designing Of Log Periodic Dipole Antenna (Lpda) And It’s Performance Analysis)
From Figure 8.9 it can be seen that the maximum radiation occurs at θ = 90°, i.e. for all φ values across the azimuthal plane. At θ = 0° and 180°, nulls occur in the radiation pattern (i.e., along the z-axis, off the ends of the dipole). Figure 8.10 depicts the 2-D antenna pattern plotted on polar plots, for varying θ at φ = 0° and for varying φ at θ = 90°, respectively.
Figure 8.10 A typical 2-D polar plot of the half wave dipole (source: Rahman, I., & Aftab Uddin Tonmoy, S. (2014).
Designing Of Log Periodic Dipole Antenna (Lpda) And It’s Performance Analysis)
In fact, Figure 8.10 shows just two cuts of the radiation pattern, and the 3-D radiation pattern can readily be visualized from these 2-D polar plots. The plots depict the behaviour of the half-wave dipole, which is often loosely regarded as an omnidirectional radiator. Strictly, only the isotropic source is a true omnidirectional radiator, and it exists only in theory: an isotropic radiator is defined as a hypothetical lossless antenna that radiates equally in all directions.
8.4.6 Radiation Pattern Lobes And Beamwidth:
Different sections of the antenna pattern are termed lobes, which can be classified as major (main) lobes, side lobes, minor lobes and back lobes. A radiation lobe is a part of the antenna's radiation pattern bounded by regions of relatively weak radiation intensity.
The major lobe (or main beam) is the lobe containing the direction in which the field strength is maximum, hence termed the direction of maximum radiation; the direction of the major lobe determines the antenna's directivity. Any radiation lobe other than the main lobe is a minor lobe; the minor lobes comprise the side lobes and back lobes. Back lobes occur in exactly the opposite direction to the main lobe. Both side and back lobes represent radiation in undesired directions and hence should be reduced as much as possible.
Linked with the antenna's radiation pattern is the beamwidth, the angular separation between two identical points on opposite sides of the pattern maximum. Several beamwidths exist in the radiation pattern of an antenna, of which the Half Power Beamwidth (HPBW) is the most widely used. The HPBW is the angular separation between the directions in which the field strength decreases to 1/√2 of its maximum value.
Figure 8.11 Polar form representation of Radiation Lobes and beamwidths of amplitude pattern of an antenna
(source: Balanis, C. A. Antenna theory: analysis and design. John Wiley & Sons,4th edition)
Since the power density of a wave is proportional to the square of the electric field, the power density falls to half its maximum value when the electric field falls to 1/√2 of its maximum value, i.e. a reduction of 3 dB in power density. Hence the HPBW is also known as the 3-dB beamwidth. The other crucial beamwidth is the First-Null Beamwidth (FNBW), defined as the angular separation between the first nulls of the radiation pattern.
Local maxima in the antenna pattern are termed the side lobes of the pattern. In the ideal case, the antenna should radiate only in the direction of the major beam, and the presence of side lobes indicates power leakage in unwanted directions. Side lobes are normally undesirable as far as the radiation pattern of the antenna is concerned.
8.4.7 Directivity:
The radiation intensity can be written as
U(θ, φ) = Um |F(θ, φ)|²
where Um denotes the maximum radiation intensity and |F(θ, φ)|² is the antenna power pattern, normalized to a maximum value of unity. Integrating the radiation intensity over all angles surrounding the antenna gives the total radiated power,
P = ∬ U(θ, φ) dΩ = Um ∬ |F(θ, φ)|² dΩ
Uavg is the radiation intensity averaged over the 4π solid angle, given by
Uavg = (1/4π) ∬ U(θ, φ) dΩ = P/4π
Hence we can write Eq. (8.15) as
D(θ, φ) = U(θ, φ)/(P/4π) = 4πU(θ, φ)/P
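The directivity formula can be checked numerically. The sketch below is our own illustration, not from the text: it integrates the normalized power pattern |F|² = sin²θ of a short dipole over the sphere and recovers the classic directivity D = 1.5.

```python
import numpy as np

# Normalized power pattern of a short dipole: |F(theta, phi)|^2 = sin^2(theta)
theta = np.linspace(0.0, np.pi, 2001)
F2 = np.sin(theta) ** 2

# Total radiated power: P = Um * Integral of |F|^2 dOmega, dOmega = sin(theta) dtheta dphi.
# The pattern has no phi dependence, so the phi integral contributes a factor 2*pi;
# the theta integral is done with the trapezoidal rule.
integrand = F2 * np.sin(theta)
dtheta = theta[1] - theta[0]
P_over_Um = 2 * np.pi * np.sum((integrand[:-1] + integrand[1:]) / 2) * dtheta

# D = 4*pi*U_max / P, with U_max = Um because the pattern peak is normalized to 1
D = 4 * np.pi / P_over_Um
```

The analytic value of the integral is 8π/3, giving D = 4π/(8π/3) = 1.5 exactly.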
8.4.8 Antenna Efficiency:
Owing to ohmic losses on the surface of the antenna, a portion of the power delivered to the antenna terminals is lost in heating the antenna; as a result, not all of the power supplied to the antenna is radiated. The antenna efficiency is expressed as
e = (power radiated by the antenna)/(total power supplied to the antenna) = P/Pin = P/(P + Po)
where Pin is the input power supplied to the antenna, Po the power dissipated through ohmic loss on the antenna, and P the radiated power.
8.4.9 Antenna Gain:
Another useful parameter describing antenna performance is the gain. While antenna gain is closely associated with the directivity, it accounts for the antenna efficiency as well as its directional capabilities. The antenna gain in a specified direction is defined as
Gain = 4π (radiation intensity)/(total input power) = 4πU(θ, φ)/Pin
From the definitions above, gain and directivity are related by Gain = e × D(θ, φ).
8.4.10 Bandwidth:
The antenna bandwidth is the frequency range within which the performance of the antenna, with respect to certain parameters, conforms to a specified standard. The bandwidth is regarded as the range of frequencies on either side of the centre frequency over which the antenna characteristics remain within acceptable limits of their values at the centre frequency. In simple terms, signals are transmitted and received over a range of frequencies, and this specific frequency range is allocated to a particular signal so that other signals do not intrude on it.
8.4.11 Polarization Of Antenna:
The polarization of an antenna in a given direction is defined as the polarization of the wave radiated (transmitted) by the antenna in the far-field region. Polarization can be categorized as linear, circular or elliptical. If the electric field vector oscillates back and forth along a line, the wave is linearly polarized. If the tip of the electric field vector traces an ellipse, the field is elliptically polarized. If the electric field stays uniform in length but traces a circular path, it is circularly polarized. This is depicted in Figure 8.12.
Figure 8.12 (a) Linear polarization (b) Circular polarization (c) Elliptical polarization (source: https://nptel.ac.in/courses/117/107/117107035/)
Figure 8.13 Schematic of Log Periodic Dipole Antenna (LPDA) (source: Rahman, I., & Aftab Uddin Tonmoy, S. (2014).
Designing Of Log Periodic Dipole Antenna (Lpda) And It’s Performance Analysis)
8.5 Log Periodic Dipole Antenna:
In the current dissertation, an attempt has been made to design the LPD antenna (the receiving element) as a first step towards designing a solar radio burst detection system. A log-periodic dipole antenna is a system of driven elements (dipoles) stacked together as shown in Figure 8.13. LPDAs are constructed to operate across a wide band of frequencies. The benefit of using log-periodic structures is that they display essentially stable antenna characteristics across the frequency band. Not all elements in the antenna system are active at a given frequency; rather, the design is such that the active region moves through the elements as the operating frequency changes. There are different types of log-periodic structures, such as dipole, trapezoidal, planar, zig-zag, slot and V types. However, the most favoured by amateurs are the log-periodic dipole arrays, because of their stable characteristics across the operating frequency range. They are made to operate over a span of frequencies determined by the design parameters. Across the desired range, the electrical characteristics such as gain, front-to-back ratio and feed-point impedance stay approximately constant. The same is not true for other types of antennas: for example, the gain or front-to-back ratio (or both) of Yagi-Uda or quad antennas changes when the operating frequency deviates from the optimum design frequency, which also leads to an increase in SWR. Even terminated antennas suffer a significant change in gain when the operating frequency deviates.
In Figure 8.13 it can be seen that the log-periodic antenna comprises several dipole elements of varying lengths and spacings. The dipole lengths and the spacings between them increase smoothly as one moves from the smallest element to the largest. It is on this feature that the design of the LPDA is based, and it allows the frequency to change without greatly disturbing the electrical operation of the antenna. Each individual element is connected alternately to a distributive feeder system; that is, adjacent antenna elements are connected in opposite phase, so that the total radiation is produced towards the direction of the shorter elements while the radiation towards the longer side cancels out. A coaxial line is used for this purpose, running through the feeder from the longest to the shortest element. At a specific frequency, the energy travels along the feeder until it arrives at the region where the electrical lengths of the dipoles and the phase relationships are such that radiation is produced. As the frequency changes, the resonant-element position moves smoothly from one element to the next. The lower and upper frequency limits are thus determined by the lengths of the longest and shortest elements; conversely, the lengths must be selected so that the bandwidth requirement is satisfied. The longest element corresponds to approximately ¼ wavelength at the lower frequency limit, while the half-element on the shorter side should be approximately ¼ λ at the higher frequency limit of the required operational bandwidth.
8.5.1 Design Equation Of Log Periodic Antenna:
Log-periodic structures are frequency independent in the sense that their electrical parameters vary periodically with the logarithm of the frequency. When the frequency f1 shifts to a frequency f2 within the antenna passband, the relationship is
f2 = f1/τ
where τ is a design parameter, a constant with τ < 1.0. Similarly,
f3 = f1/τ² (8.22)
⋮
fn = f1/τ^(n−1) (8.23)
where f1 is the lowest frequency and fn the highest frequency.
The design constant τ is related to the element lengths l and the inter-element spacings d by:
l2 = τ l1 (8.24)
l3 = τ l2 (8.25)
d34 = τ d23 (8.28)
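These recurrences can be sketched in a few lines. The function name and the numbers used below (τ = 0.9, f1 = 45 MHz, and the element dimensions) are illustrative choices of ours, not design values from the text.

```python
def lpda_elements(tau, f1, l1, d12, n):
    """Frequencies, element lengths and inter-element spacings of an LPDA
    generated from the design constant tau (< 1), per Eqs. (8.22)-(8.28)."""
    freqs = [f1 / tau ** (i - 1) for i in range(1, n + 1)]    # f_n = f1 / tau^(n-1)
    lengths = [l1 * tau ** (i - 1) for i in range(1, n + 1)]  # l_{i+1} = tau * l_i
    spacings = [d12 * tau ** (i - 1) for i in range(1, n)]    # d_{i+1,i+2} = tau * d_{i,i+1}
    return freqs, lengths, spacings

# Hypothetical 5-element design starting from a 2.1 m longest element
freqs, lengths, spacings = lpda_elements(tau=0.9, f1=45e6, l1=2.1, d12=0.8, n=5)
```

Each successive element is shorter by the factor τ while its resonant frequency is higher by 1/τ, which is exactly why the active region slides along the boom as the operating frequency changes.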
Figure 8.14 Configuration and antenna pattern of Log Periodic Dipole Antenna (LPDA) (source: Rahman, I., & Aftab
Uddin Tonmoy, S. (2014). Designing Of Log Periodic Dipole Antenna (Lpda) And It’s Performance Analysis)
It has been observed that the elements larger than the active region resonate below the operating frequency and are inductive in nature, while those toward the front are capacitive. Hence the element just behind the active region behaves as a reflector, while the ones in front act as directors. If an LPDA is designed to operate over a given frequency range, the design should also incorporate an active region of antenna elements for the lower and upper frequency limits, having a bandwidth Bar.
Figure 8.15 Contour plot of constant directivity in dB versus τ and σ (source: The ARRL Antenna Book, 21st Edition, ARRL, Connecticut, USA)
8.5.2 Design Calculation Of Log Periodic Antenna For Capturing Solar Radio Burst Signal:
In order to design the log-periodic antenna for capturing the solar radio burst signal, the following design procedure has been followed from The ARRL Antenna Book (21st Edition):
= [ (1/4)(1 − 1/39.531) ] × 1.4359 × 19.68 = 6.8852 feet ≈ 2.1 m
9. DATA ANALYSIS:
Analysis of data is a very important task, since data are the source of information: they provide validation of theories and models as well as guidance for their improvement, and data analysis can sometimes give birth to a new theory or model. In reality, however, any time series, whether from an experiment, a dynamical system, or an economic, sociological or biological context, usually contains systematic or manual error. Analysing such data in the presence of noise often leads to a wrong interpretation. So we need to build an initial platform by denoising the data, from which an extensive study can start. Filtering a time series is therefore an indispensable task.
1. MOVING AVERAGE: 3-point or 5-point moving average method:
The greatest information loss occurs in the 3-point and 5-point moving average methods [2]. While concentrating on reducing the noise, we agree that we cannot compromise heavily on the trend and character of the time series data, but this requirement is violated by the 3-point and 5-point moving average methods. These methods miss the proper information at the i-th place, because they are busy averaging the information at the neighbouring places instead of giving proper weight to the information at the i-th place of the original time series.
The MA filter performs three important functions:
1) It takes L input points, computes the average of those L points and produces a single output point.
2) Due to the computations involved, the filter introduces a definite amount of delay.
3) The filter acts as a low-pass filter (with a poor frequency-domain response but a good time-domain response).
The difference equation for an L-point discrete-time moving average filter with input x and averaged output y is
y[n] = (1/L) Σ_{k=0}^{L−1} x[n−k]
For example, a 5-point moving average FIR filter takes the current and previous four samples of input and calculates their average. This operation is represented, as shown in Figure 1, by the following difference equation for the discrete-time input-output relationship:
y[n] = 1/5 {x[n] + x[n−1] + x[n−2] + x[n−3] + x[n−4]}
     = 0.2 {x[n] + x[n−1] + x[n−2] + x[n−3] + x[n−4]}
Figure 1: Discrete-time 5-point Moving Average FIR filter
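The 5-point difference equation can be sketched directly. One assumption of ours that the text does not specify: samples before the start of the record are treated as zero (zero initial conditions), as a causal FIR filter would.

```python
def moving_average(x, L=5):
    """L-point causal moving average: y[n] = (1/L) * sum_{k=0}^{L-1} x[n-k].
    Samples before the start of the record are treated as zero."""
    y = []
    for n in range(len(x)):
        window = x[max(0, n - L + 1): n + 1]  # shorter at start-up (implicit zeros)
        y.append(sum(window) / L)
    return y

y = moving_average([1, 2, 3, 4, 5, 6])
```

Once the start-up transient has passed (n ≥ L − 1), each output is simply the mean of the last five inputs, which is where the L − 1 samples of delay come from.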
ii. KALMAN FILTER: In 1960, R. E. Kalman published his famous papers describing a recursive solution to the discrete-data linear filtering problem. Since that time, due in large part to advances in digital computing, the Kalman filter has been the subject of extensive research and application, particularly in the area of autonomous or assisted navigation. The Kalman filter is a set of mathematical equations that provides an efficient computational (recursive) means to estimate the state of a process in a way that minimizes the mean squared error. The filter is very powerful in several respects: it supports estimation of past, present and even future states, and it can do so even when the precise nature of the modelled system is unknown. Conceptually, Kalman filtering of a time series {xi}; i = 1, 2, …, n is analogous to updating a running average:
yi = {(i−1)/i} yi−1 + (1/i) xi;  i = 1, 2, …, n.
It should work far better than the 3-point or 5-point moving average method, since in producing the i-th estimate this method uses only the previous estimate and the i-th datum of the original time series, taking their convex linear combination rather than starting the averaging over each time.
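The running-average analogy can be verified in a few lines: carrying the previous estimate forward with weight (i−1)/i reproduces the plain arithmetic mean of the first i samples. This is a sketch of the analogy only, not of a full Kalman filter; the function name and data are our own.

```python
def running_average(x):
    """y_i = ((i-1)/i) * y_{i-1} + (1/i) * x_i -- a convex combination of the
    previous estimate and the newest sample, equal to the mean of x[0..i-1]."""
    y = []
    for i, xi in enumerate(x, start=1):
        if i == 1:
            y.append(xi)
        else:
            y.append(((i - 1) / i) * y[-1] + (1 / i) * xi)
    return y

y = running_average([2.0, 4.0, 6.0])
```

After three samples the recursion yields 4.0, the mean of 2, 4 and 6, confirming the convex-combination interpretation.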
iii. EMD AND HT: Empirical Mode Decomposition is a method of decomposing a complex multicomponent signal x(t) into a series of L signals hi(t), 1 ≤ i ≤ L, known as Intrinsic Mode Functions (IMFs), in decreasing order of frequency. Mathematically,
x(t) = Σ_{i=1}^{L} hi(t) + d(t)
where d(t) is the residue. Each of the IMFs, say the i-th one hi(t), is estimated by an iterative process known as sifting, applied to the residual multicomponent signal.
The Intrinsic Mode Functions are defined by two conditions:
1. Over the whole data set, the number of local maxima and minima must be equal to the number of zero crossings, or differ from it by at most one.
2. At every point in time, the mean value of the "upper envelope" (defined by the local maxima) and the "lower envelope" (defined by the local minima) must be zero.
The sifting iterations are stopped using a standard-deviation criterion computed between consecutive sifting results h_{k−1}(t) and h_k(t),
SD = Σ_{t=0}^{T} [ |h_{k−1}(t) − h_k(t)|² / h_{k−1}²(t) ]
so the iteration for finding an IMF is stopped when 0.2 < SD < 0.3.
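The stopping criterion can be sketched as a small helper comparing two consecutive sifting iterates. The arrays below are arbitrary illustrative numbers, not real sifting output, and the function name is our own.

```python
def sifting_sd(h_prev, h_curr):
    """Standard-deviation stopping criterion between consecutive sifting
    iterates: SD = sum_t |h_{k-1}(t) - h_k(t)|^2 / h_{k-1}(t)^2.
    Per the text, sifting stops once 0.2 < SD < 0.3."""
    return sum((a - b) ** 2 / a ** 2 for a, b in zip(h_prev, h_curr))

sd = sifting_sd([1.0, 2.0, 4.0], [1.0, 1.0, 2.0])
```

Note the division by h_{k−1}²(t): in practice an implementation must guard against near-zero values of the previous iterate.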
iv. WAVELET ANALYSIS: This section describes the method of wavelet analysis, includes a discussion of different wavelet functions, and gives details of the analysis of the wavelet power spectrum. Results in this section are adapted to discrete notation from the continuous formulas given in Daubechies, in Weng and Lau, and in Meyers et al. (1993). The noise-reduction algorithm consists of the following steps.
Step 1: In the first step, we differentiate the noisy signal x(t) to obtain the data xd(t), using the central finite difference method with fourth-order correction to minimize the error, i.e.
xd(t) = dx(t)/dt
Step 2: We then take the Discrete Wavelet Transform of the data xd(t) and obtain wavelet coefficients Wj,k at various dyadic scales j and displacements k. A dyadic scale is a scale whose numerical magnitude is equal to 2 raised to an integer exponent, and it is labelled by that exponent; thus the dyadic scale j refers to a scale of size 2^j, i.e. a resolution of 2^j data points. A low value of j therefore implies a finer resolution, while a high j analyzes the signal at a coarser resolution. This transform is the discrete analogue of the continuous Wavelet Transform [14, 19, 20] and can be represented by the formula
Wj,k = Σ_t xd(t) ψj,k(t),  with  ψj,k(t) = 2^(−j/2) ψ(2^(−j) t − k)
where j, k are integers. As for the wavelet function ψ(t), we have chosen Daubechies' compactly supported orthogonal wavelet with four filter coefficients [14].
Step 3: In this step we estimate the power Pj contained in the different dyadic scales j, via
Pj = Σ_k |Wj,k|²
By plotting the variation of Pj with j, we see that it is possible to identify a scale jm at which the power due to noise falls off rapidly. This is important because, as we shall see from the case studies, it provides an automated means of detecting the threshold. Identifying the scale jm at which the power due to noise shows its first minimum allows us to reset all Wj,k up to scale index jm to zero, that is, Wj,k = 0 for j = 1, 2, …, jm.
Step 4: In the fourth step, we reconstruct the denoised data x̂d(t) by taking the inverse transform of the coefficients Wj,k. The set of obtained values x̂d(t) gives a measure of the time variation in the signal. Upon differentiation, the contribution due to white noise moves towards the finer scales, because differentiation converts the uncorrelated stochastic process into a first-order moving average process and thereby distributes more energy to the finer scales. Finally, we plot the graph of x̂d(t) vs. t to obtain the corresponding peaks.
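The decompose/threshold/reconstruct cycle of Steps 2-4 can be sketched with the simplest orthogonal wavelet, the Haar wavelet, rather than the 4-coefficient Daubechies wavelet the text uses; the structure of the algorithm is the same, only the filter is shorter. All function names are our own.

```python
import numpy as np

def haar_analysis(a):
    """One level of the orthonormal Haar DWT: coarse and detail coefficients."""
    a = np.asarray(a, dtype=float)
    s = (a[0::2] + a[1::2]) / np.sqrt(2.0)  # approximation coefficients
    d = (a[0::2] - a[1::2]) / np.sqrt(2.0)  # detail coefficients W_{j,k}
    return s, d

def haar_synthesis(s, d):
    """Invert one Haar analysis step."""
    out = np.empty(2 * len(s))
    out[0::2] = (s + d) / np.sqrt(2.0)
    out[1::2] = (s - d) / np.sqrt(2.0)
    return out

def dwt(x, levels):
    """Multi-level decomposition; returns final approximation and details per level."""
    details = []
    a = np.asarray(x, dtype=float)
    for _ in range(levels):
        a, d = haar_analysis(a)
        details.append(d)
    return a, details

def idwt(a, details):
    """Reconstruct from the approximation and the (possibly zeroed) details."""
    for d in reversed(details):
        a = haar_synthesis(a, d)
    return a

# Zeroing details[0..jm] would low-pass the signal as in Step 3/4;
# with nothing zeroed, the orthonormal transform reconstructs the input exactly.
x = np.array([4.0, 1.0, 3.0, 7.0, 2.0, 2.0, 6.0, 0.0])
approx, details = dwt(x, levels=3)
x_rec = idwt(approx, details)
```

Setting `details[j][:] = 0.0` for the fine scales j = 0, …, jm before calling `idwt` implements the thresholding of Step 3.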
v. SAVITZKY-GOLAY FILTER: A polynomial p(n) = Σ_{k=0}^{N} ak n^k of degree N is fitted to a block of 2M + 1 input samples by minimizing the mean squared approximation error
εN = Σ_{n=−M}^{M} ( p(n) − x[n] )² = Σ_{n=−M}^{M} ( Σ_{k=0}^{N} ak n^k − x[n] )²  …………………(2)
The analysis is the same for any other group of 2M + 1 input samples. We shall refer to M as the "half width" of the approximation interval. The smoothed output value is obtained by evaluating p(n) at the central point n = 0; that is, y[0], the output at n = 0, is
y[0] = p(0) = a0
so the output value is just equal to the 0th polynomial coefficient.
Evaluating the fitted polynomial at a point other than the centre of the interval leads to nonlinear-phase filters, which can be useful for smoothing at the ends of finite-length input sequences. The output at the next sample is obtained by shifting the analysis interval to the right by one sample, redefining the origin to be the position of the middle sample of the new block of 2M + 1 samples, and repeating the polynomial fitting and evaluation at the central location. This can be repeated at each sample of the input, each time producing a new polynomial and a new value of the output sequence y[n]. Savitzky and Golay showed that at each position the smoothed output value obtained by sampling the fitted polynomial is identical to a fixed linear combination of the local set of input samples; i.e., the set of 2M + 1 input samples within the approximation interval is effectively combined with a fixed set of weighting coefficients that can be computed once for a given polynomial order N and approximation interval of length 2M + 1. That is, the output samples can be computed by a discrete convolution of the form
y[n] = Σ_{m=−M}^{M} h[m] x[n−m] = Σ_{m=n−M}^{n+M} h[n−m] x[m]
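The fixed weighting coefficients h[m] can be obtained by least squares. The sketch below (function name ours) computes them for a quadratic fit (N = 2) over a 5-point window (M = 2); the resulting weights (−3, 12, 17, 12, −3)/35 are the classic Savitzky-Golay values for this case.

```python
import numpy as np

def savgol_weights(M, N):
    """Weights such that the smoothed output equals the value at the window
    centre (n = 0) of the degree-N least-squares polynomial fit, i.e. a0."""
    n = np.arange(-M, M + 1)
    A = np.vander(n, N + 1, increasing=True)  # A[i, k] = n_i ** k
    # Least-squares polynomial coefficients are a = pinv(A) @ x, and the
    # smoothed value is a0, so the weights are the first row of pinv(A).
    return np.linalg.pinv(A)[0]

h = savgol_weights(M=2, N=2)
```

Because the weights depend only on M and N, they are computed once and then applied as an ordinary FIR convolution, exactly as the equation above states.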
We have applied this model to both of the entire unfiltered data series, the solar irradiance and the Forbush decrease in the present case, and have plotted the filtered data in Figures 1b and 2b. These smoothed data series are then used in a new nonparametric approach to estimate Granger causality directly.
vi. SIMPLE EXPONENTIAL SMOOTHING: The prescribed model for a time series {xi}; i = 1, 2, …, n is
y1 = x1 and yi = αxi + (1−α)yi−1;  i = 2, 3, …, n
where yi is the smoothed value at the i-th position and α (0 < α < 1) is a parameter. This is equivalent to
y1 = x1
and yi = αxi + α(1−α)xi−1 + α(1−α)²xi−2 + … + α(1−α)^(i−2)x2 + (1−α)^(i−1)x1
for i = 2, 3, …, n
where the sum of the corresponding weights α, α(1−α), α(1−α)², …, α(1−α)^(i−2) and (1−α)^(i−1) is equal to unity. Thus, in effect, each smoothed value is a convex linear combination of the current observation and all the previous observations. We believe that denoising a time series can be performed with a satisfactory level of accuracy if we concentrate on the following two matters:
i) Cases may arise in which the error generated at a certain position propagates to the following stages. In such cases, while developing a smoothing model we must take this propagation of error into account and try to counteract it.
ii) While extracting the new time series by filtering the old one, we must keep in mind the positional importance of the data, i.e. if {yi} is the new time series developed by filtering the old one {xi}; i = 1, 2, …, n, then each yi must be generated mostly from the corresponding xi. In the case of the Kalman filter [3, 4], the positional importance is not maintained, since in the expression for the i-th filtered value more weight is given to xi−1 than to xi. Keeping these two points in mind, in this study we make an effort to apply the simple exponential smoothing technique, as it can more reliably reduce the noise mixed into the astronomical data set. There remains, however, the question of the value of the coefficient. We argued in our earlier discussion [ ] that the value of α should be near 0.5 or 0.511, as this provides the best result (as shown below) without hampering the positional importance of the data in the sense of filtration.
Thus we can certainly accept Simple Exponential Smoothing and the Savitzky-Golay filter as favourable examples of our General Adaptive Rule of Filtering. In the absence of the propagation of error, the percentage reduction of the total error pre
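The smoothing recursion itself is a few lines. The sketch below uses α = 0.5, the value favoured above; the data values are arbitrary illustrations of ours.

```python
def exp_smooth(x, alpha=0.5):
    """Simple exponential smoothing: y1 = x1, y_i = alpha*x_i + (1-alpha)*y_{i-1}."""
    y = [x[0]]
    for xi in x[1:]:
        y.append(alpha * xi + (1 - alpha) * y[-1])
    return y

y = exp_smooth([1.0, 3.0, 5.0], alpha=0.5)
```

Unrolling the recursion shows each output is the convex combination of all observations described above, with geometrically decaying weights that sum to one.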
Finite Variance Scaling Analysis: A well-known version of the Finite Variance Scaling Method (FVSM) is Standard Deviation Analysis (SDA) [1, 2], which is based on evaluating the standard deviation D(t) of the variable x(t):
D(tj) = [ (1/j) Σ_{i=1}^{j} x²(ti) − ( (1/j) Σ_{i=1}^{j} x(ti) )² ]^(1/2)  for j = 1, 2, …, n.
Eventually it is observed [1, 2, 3] that D(t) follows a power-law scaling in t. The Hurst
exponent occurs in several areas of applied mathematics, including fractals and chaos theory, long-memory processes and spectral analysis. Hurst exponent estimation has been applied in areas ranging from astrophysics to computer networking. It applies to data sets that are statistically self-similar, meaning that the statistical properties of the entire data set are the same as those of its sub-sections. Estimation of the Hurst exponent was originally developed in hydrology. For a time series data set, the estimate also provides a measure of whether the data are a pure random walk or contain underlying trends: a process with an underlying trend defines its own randomness with some degree of autocorrelation. When the autocorrelation has a very long decay, the process is sometimes referred to as a long-memory process; conversely, a rapid decay characterizes a short-memory process. It has been found that processes which exhibit Hurst exponent values typical of long-memory processes are sometimes purely random.
Astronomical phenomena like the Total Solar Irradiance (TSI) variation and the Forbush
decrease (FD) indices are very important for the understanding of the solar internal structure and the
solar-terrestrial relationships. Hurst et al., in their pioneering work, introduced the notion of
rescaled-range analysis of a time series, which takes the scaling form $D(t) \propto t^{H}$ (H is now
called the Hurst exponent). This stimulated Mandelbrot to introduce the concept of
fractional Brownian motion (FBM) [5]. In a random-walk context the value H = 0.5 indicates
uncorrelated noise, 0 < H < 0.5 indicates anti-persistent noise, and 0.5 < H <
1 indicates persistent or long-range correlated noise [4].
Alternative scaling methods applied to a time series focus on the autocorrelation function, on the
power-spectrum representation $P(k) = C k^{-\alpha}$ (from which $H = 1 - \alpha/2$), and on the
evaluation of the variance of the generated diffusion. All such scaling methods are related to the
original Hurst analysis and yield this Hurst exponent for
sufficiently long time series. The value of the Hurst exponent ranges between 0 and 1. A value of
0.5 indicates a true random walk (a Brownian time series). In a random walk there is no correlation
between any element and a future element. A Hurst exponent value 0 < H < 0.5 indicates a time
series with anti-persistent behaviour (negative autocorrelation). If the Hurst exponent lies in
0.5 < H < 1.0, the process is a long-memory process. A Hurst exponent value in this range
indicates persistent behaviour (positive autocorrelation), and the time series will therefore
tend to follow a trend. The conventional run test in data mining checks
whether a given time series is random or not, but FVSM is more useful in this
context, since it not only answers the question of randomness in the data but also
provides a measure of the persistence or anti-persistence in it [8, 9, 10].
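The SDA procedure above can be sketched as follows. This is a minimal stdlib-only sketch under the assumption that H is taken as the least-squares slope of log D(t_j) against log j; the function names are hypothetical and the test series is a pure trend, not the TSI or FDi data analysed below.

```python
# Standard Deviation Analysis (SDA): D(t_j) is the running standard
# deviation of x(t_1)..x(t_j); the least-squares slope of log D(t_j)
# against log j estimates the Hurst exponent H.
import math

def running_std(x):
    """D(t_j): sqrt(mean of squares minus square of mean) over the first j points."""
    out, s, s2 = [], 0.0, 0.0
    for j, xi in enumerate(x, start=1):
        s += xi
        s2 += xi * xi
        out.append(math.sqrt(max(s2 / j - (s / j) ** 2, 0.0)))
    return out

def hurst_sda(x):
    """Least-squares slope of log D(t_j) versus log j (points with D > 0)."""
    pts = [(math.log(j), math.log(dj))
           for j, dj in enumerate(running_std(x), start=1) if dj > 0]
    mx = sum(p for p, _ in pts) / len(pts)
    my = sum(q for _, q in pts) / len(pts)
    num = sum((p - mx) * (q - my) for p, q in pts)
    den = sum((p - mx) ** 2 for p, _ in pts)
    return num / den

if __name__ == "__main__":
    trend = [float(i) for i in range(1, 201)]   # strongly persistent: slope near 1
    print("estimated slope:", hurst_sda(trend))
```

A slope near 0.5 would indicate a Brownian series, below 0.5 anti-persistence, above 0.5 persistence, matching the interpretation given above.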
Demonstration 2.2.a.i : Application of FVSM on TSI ; Graphical representation ; Conclusion
Results: This plot is fitted with equation (4), yielding H = 0.1158. As the estimated value of H in
the present case is less than 0.5, it can be suggested that the present data is anti-persistent in
behaviour (i.e. negatively autocorrelated). Further fractal-dimension analysis is needed
for confirmation.
Demonstration 2.2.a.ii : Application of FVSM on FDi ; Graphical representation ; Results
Result: the plot shows the log of the SD against the log of time, calculated from the present filtered Forbush
decrease indices. This plot is fitted, yielding H = 0.15. As the estimated value of H in the
present case is less than 0.5, it can be suggested that the present data is anti-persistent in behaviour
and the process is a short-memory process. Further confirmation is required.
FRACTAL GEOMETRY:
Fractality Analysis: Fractal methods have been used to understand the irregular and chaotic nature
of graphical structure. Since fractals were introduced in physics, their applications have promoted enormous
progress in understanding phenomena that are most directly involved in formation of irregular
structures. A broad class of clustering phenomena such as filtration, electrolysis and aggregation of
colloids and aerosols has received a good deal of attention. Other phenomena that are not strictly
clustering effects (i.e., dielectric breakdown, formation of a contact surface when two liquids are mixed
etc.) can be advantageously treated using fractals. Describing natural objects by geometry is as old as
science itself; traditionally this has involved the use of Euclidean lines, rectangles, cuboids, spheres and
so on. But, nature is not restricted to Euclidean shapes. More than twenty years ago Benoit B.
Mandelbrot observed that “clouds are not spheres, mountains are not cones, coastlines are not circles,
bark is not smooth, nor does lightning travel in a straight line”. Most of the natural objects we see
around us are so complex in shape that they deserve to be called geometrically chaotic. They appear
impossible to describe mathematically and used to be regarded as the “monsters of mathematics”. In
1975, Mandelbrot introduced the concept of fractal geometry to characterize these monsters
quantitatively and to help us to appreciate their underlying regularity. The simplest way to construct a
fractal is to repeat a given operation over and over again deterministically. The classical Cantor set is a
simple textbook example of such a fractal. It is created by dividing a line into n equal pieces,
removing (n − m) of the pieces created, and repeating the process with the m remaining pieces ad infinitum.
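The deterministic construction just described can be sketched for the classical case n = 3, m = 2 (keep the two outer thirds). A minimal illustration, not tied to any dataset in this study; the function names are hypothetical.

```python
# Classical Cantor set: split each interval into three equal pieces and
# remove the middle one, repeated generation by generation.

def cantor_step(intervals):
    """One generation: keep the left and right thirds of every interval."""
    out = []
    for a, b in intervals:
        third = (b - a) / 3.0
        out.append((a, a + third))
        out.append((b - third, b))
    return out

def cantor_set(generations):
    intervals = [(0.0, 1.0)]
    for _ in range(generations):
        intervals = cantor_step(intervals)
    return intervals

if __name__ == "__main__":
    gen3 = cantor_set(3)
    print(len(gen3))                        # 2**3 = 8 surviving intervals
    print(sum(b - a for a, b in gen3))      # total length (2/3)**3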
However, fractals that occur in nature arise through continuous kinetic or random processes. Having
realized this simple law of nature, we can imagine selecting a line randomly at a given rate, and
dividing it randomly, for example. We can further tune the model to determine how random this
randomness is. Starting with an infinitely long line we obtain an infinite number of points whose
separations are determined by the initial line and the degree of randomness with which intervals were
selected. The properties of these points appear to be statistically self-similar and characterized by the
fractal dimension, which is found to increase with the degree of increasing order and reaches its
maximum value in the perfectly ordered pattern. It is now accepted that when the power spectrum of an
irregular time series is expressed by a single power law $P(f) \propto f^{-\alpha}$, the time series shows the
property of a fractal curve. As the fractal length L(k) of the time series is expressed as $L(k) \propto k^{-D}$, where k is the
time interval, the fractal dimension D is expected to be closely related to the power-law index α. The
relation between α and D has been investigated by Higuchi and is given by $D = (5 - \alpha)/2$.
We can determine the randomness of a time series by determining its fractal dimension, and from this we
can conclude whether a physical structure is chaotic in nature or not. For any physical structure D lies
between 1 and 2. For the ideal case of a chaotic physical structure D is 5/3, which describes
inertial-range turbulence in an incompressible fluid. If the value of D lies between 1 and 5/3, the time
series is expected to follow some trend; if D lies between 5/3 and 2, it usually does not obey any trend.
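Higuchi's curve-length construction can be sketched as follows. This is a minimal stdlib-only sketch of the standard Higuchi algorithm, not the exact implementation used in this study; the function name and the choice kmax = 8 are illustrative assumptions.

```python
# Higuchi's method: the curve length L(k) scales as L(k) ∝ k^(-D); the
# least-squares slope of log L(k) against log(1/k) gives the fractal
# dimension D of the time series.
import math

def higuchi_fd(x, kmax=8):
    """Estimate the fractal dimension D of a series by Higuchi's method."""
    n = len(x)
    logk, logl = [], []
    for k in range(1, kmax + 1):
        lengths = []
        for m in range(k):                       # subseries offsets
            idx = list(range(m + k, n, k))
            if not idx:
                continue
            steps = sum(abs(x[i] - x[i - k]) for i in idx)
            if steps == 0:
                continue
            # Higuchi's normalisation of the subseries curve length
            lengths.append(steps * (n - 1) / (len(idx) * k) / k)
        if lengths:
            logk.append(math.log(1.0 / k))
            logl.append(math.log(sum(lengths) / len(lengths)))
    mk = sum(logk) / len(logk)
    ml = sum(logl) / len(logl)
    num = sum((a - mk) * (b - ml) for a, b in zip(logk, logl))
    den = sum((a - mk) ** 2 for a in logk)
    return num / den

if __name__ == "__main__":
    line = [float(i) for i in range(500)]
    print("D for a straight line:", higuchi_fd(line))   # ≈ 1, the trending limit
```

A smooth trending series gives D near 1, while D near 2 indicates a highly irregular, trend-free series, consistent with the 5/3 boundary discussed above.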