Non-Conventional Energy Sources


(B) Non-conventional energy sources:

The conventional energy sources discussed above are exhaustible and, in some cases, the plants required to harness them are highly expensive to install. In order to meet the energy demand of a growing population, scientists have developed alternative, non-conventional sources of energy which are renewable and provide a pollution-free environment.
Some non-conventional, renewable and inexpensive energy sources are described below:
1. Solar energy:
Solar energy, a primary energy source, is non-polluting and inexhaustible.
There are three methods to harness solar energy:
(i) Converting solar energy directly into electrical energy in solar power stations using photocells, photovoltaic cells or silicon solar cells.
(ii) Using photosynthetic and biological processes for energy trapping. In photosynthesis, green plants absorb solar energy and convert it into chemical energy, stored in the form of carbohydrates.
(iii) Converting solar energy into thermal energy by suitable devices; this may subsequently be converted into mechanical, chemical or electrical energy.
Since solar energy is inexhaustible and its conversion to other forms of energy is non-polluting, attention should be paid to its maximum utilization.
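The photovoltaic route in (i) can be illustrated with a rough back-of-the-envelope estimate. The panel area, cell efficiency and insolation figures below are illustrative assumptions, not values from the text:

```python
def pv_daily_energy_kwh(area_m2, efficiency, insolation_kwh_per_m2_day):
    """Rough daily electrical output of a PV array (kWh):
    collected solar energy scaled by the cell conversion efficiency."""
    return area_m2 * efficiency * insolation_kwh_per_m2_day

# Assumed: 10 m^2 of panels, 18% efficiency, 5 kWh/m^2 per day of sunshine
print(pv_daily_energy_kwh(10, 0.18, 5.0))  # 9.0 (kWh per day)
```

This sketch ignores temperature, wiring and inverter losses, so a real installation would deliver somewhat less.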
2. Wind energy:
Wind is air in motion. The movement of air takes place due to convection currents set up in the atmosphere, which in turn are caused by the heating of the earth's surface by solar radiation, the rotation of the earth, etc. The movement of air occurs both horizontally and vertically.
The average annual wind density is 3 kW/m²/day along the coastal lines of Gujarat, the Western Ghats and central parts of India, and it may show a seasonal variation (i.e., in winter it may go up to 10 kW/m²/day).
Since wind has a tremendous amount of energy, it can be converted into mechanical or electrical energy using suitable devices. Nowadays, wind energy is converted into electrical energy, which is subsequently used for pumping water, grinding corn, etc. As per available data, nearly 20,000 MW of electricity can be generated from wind. In Puri, wind farms have been set up which can generate 550 kW of electricity.
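The energy content of moving air can be estimated with the standard kinetic power density formula p = ½ρv³ (power per square metre of area swept by a turbine). The wind speed below is an illustrative assumption:

```python
def wind_power_density_w_per_m2(wind_speed_m_s, air_density=1.225):
    """Kinetic power carried by the wind per square metre of swept
    area: p = 0.5 * rho * v^3 (rho is air density in kg/m^3 at sea level)."""
    return 0.5 * air_density * wind_speed_m_s ** 3

# Assumed steady wind of 8 m/s:
print(wind_power_density_w_per_m2(8.0))  # 313.6 (W/m^2)
```

Note that a real turbine extracts at most about 59% of this (the Betz limit), and usually much less.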
3. Tidal energy:
The energy associated with the tides of the ocean can be converted into electrical energy. France constructed the first tidal power plant in 1966. India could take up ocean thermal energy conversion (OTEC), which would be capable of generating 50,000 MW of electricity to meet the power requirements of remote oceanic islands and coastal towns. The Netherlands is famous for windmills. In India, Gujarat and Tamil Nadu have windmills. The largest wind farm has been set up at Kanyakumari, which generates 380 MW of electricity.
4. Geothermal energy:

Geothermal energy may be defined as the heat energy obtainable from hot rocks present inside the earth's crust. In the deeper regions of the earth's crust, solid rock melts into magma due to the very high temperature. The magma layer is pushed up by geological changes and gets concentrated below the earth's crust. Places where hot magma is concentrated at fairly shallow depths are known as hot spots.
These hot spots are the sources of geothermal energy. Nowadays, efforts are being made to use this energy for generating power, refrigeration, etc. There are quite a few methods of harnessing geothermal energy. Sites of geothermal energy generation in India include Puga (Ladakh), Tattapani (Surguja, M.P.), the Cambay Basin (Gujarat) and the Alaknanda Valley (Uttaranchal).
5. Bio-mass based energy:
Organic matter originating from living organisms (plants and animals), such as wood, cattle dung, sewage, agricultural wastes, etc., is called biomass. These substances can be burnt to produce heat energy, which can be used in the generation of electricity. Thus, the energy produced from biomass is known as biomass energy.
There are three forms of biomass:
(i) Biomass in traditional form:
Energy is released by direct burning of biomass (e.g. wood, agricultural residue etc.)
(ii) Biomass in nontraditional form:
The biomass may be converted into some other form of fuel which can release energy. For example, carbohydrates can be converted into methanol or ethanol, which may be used as liquid fuels.
(iii) Biomass for domestic use:
When organic matter like cow dung, agricultural wastes, human excreta, etc. is subjected to bacterial decomposition in the presence of water and in the absence of air, a mixture of CH4, CO2, H2, H2S, etc. is produced. This mixture of gases is known as biogas. The residue left after the removal of biogas is a good source of manure, and biogas itself is a good non-polluting fuel.
6. Biogas:
Biogas is an important source of energy to meet the energy requirements of rural areas. As per available data, around 22,420 million m³ of gas can be produced in a year from the large amount of cow dung available in rural areas. The gas is generated by the action of bacteria on cow dung in the absence of air (oxygen). There are two types of biogas plants, namely the fixed dome type and the floating gas holder type (Fig. 4.3 & 4.4).
These plants are commonly known as gobar gas plants because the usual raw material is cow dung (gobar). The process involves preparing a slurry of cow dung with water; sometimes farm wastes can also be added to the slurry.
The slurry is subjected to bacterial decomposition at about 35°C. There are about 330,000 biogas plants in India. All-India dung production is about 11.30 kg per cattle and 11.60 kg per buffalo, with about 67.10 m³ of gas obtained per ton of wet dung.
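The figures above can be combined into a rough yield estimate. This is a minimal sketch that assumes the dung production figure is per animal per day (the text does not state the time basis) and uses the quoted 67.10 m³ of gas per tonne of wet dung:

```python
def annual_biogas_m3(cattle, dung_kg_per_day=11.30, gas_m3_per_tonne=67.10):
    """Annual biogas yield for a herd, using the per-animal dung
    production and gas-yield figures quoted in the text.
    Assumption: dung_kg_per_day is per animal per day."""
    dung_tonnes_per_year = cattle * dung_kg_per_day * 365 / 1000.0
    return dung_tonnes_per_year * gas_m3_per_tonne

# A hypothetical 10-cattle household herd:
print(round(annual_biogas_m3(10)))  # 2768 (m^3 of biogas per year)
```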
Bioconversion mechanism

Bioconversion, also known as biotransformation, is the conversion of organic materials, such as plant or animal waste, into usable products or energy sources by biological processes or agents, such as certain microorganisms. One example is the industrial production of cortisone, in which one step is the bioconversion of progesterone to 11-alpha-hydroxyprogesterone by Rhizopus nigricans. Another example is the bioconversion of glycerol to 1,3-propanediol, which has been the subject of scientific research for many decades.

In the USA, the Bioconversion Science and Technology group performs multidisciplinary R&D for the Department of Energy's (DOE) relevant applications of bioprocessing, especially with biomass. Bioprocessing combines the disciplines of chemical engineering, microbiology and biochemistry. The group's primary role is the investigation of the use of microorganisms, microbial consortia and microbial enzymes in bioenergy research. New cellulosic ethanol conversion processes have enabled the variety
and volume of feedstock that can be bioconverted to expand rapidly. Feedstock now includes
materials derived from plant or animal waste such as paper, auto-fluff, tires, fabric, construction
materials, municipal solid waste (MSW), sludge, sewage, etc.

Biogas digester


A biogas digester, also known as a methane digester, is a piece of equipment which can turn organic
waste into usable fuel. In addition to providing a source of renewable fuel, biogas digesters also
provide low-cost fuel to people in poverty, and they help to dispose of waste materials which would
otherwise be discarded. A number of nations have invested in research on biogas digesters, ranging
from devices which can be used by a single household to industrial-scale equipment which could be
used to generate large amounts of power.
The biogas digester relies on bacterial decomposition of biomass, waste material which is biological
in origin, ranging from kitchen scraps to cow dung. As anyone who has walked past a poorly
maintained outhouse or compost pile is aware, when anaerobic conditions develop in a collection of
biomass, they attract bacterial organisms which emit a number of distinctive gases, most notably
methane, in the process of digestion. These gases are usually viewed as a symptom of inefficiency and
they are vented away for disposal, but they can actually be very useful.
In a biogas digester, anaerobic digestion and fermentation are actually encouraged, and the gases are
vented to a storage container. The resulting biogas can be burned as fuel for cooking, heating, and
electricity generation. The sludge which remains in the biogas digester after the fermentation process
is complete can be used for fertilizer. Because the process harnessed in a biogas digester is anaerobic
in nature, these devices are sometimes known as anaerobic digesters.
One of the major applications for biogas digesters is in the disposal of human and farm waste. In
many developing nations, controlling human waste is a major issue, and so is the availability of
energy. By providing communities with biogas digesters, organizations and the government could
help contribute to improvements in public health while also providing communities with a source of
sustainable energy. Farms, which can generate large volumes of animal waste, can also utilize biogas digesters to power their operations or to generate power which can be traded or sold.

Advocates for sustainable energy have promoted the biogas digester as one option which could be
used to reduce reliance on fossil fuels. In addition to providing sustainable energy, biogas digesters
are also low cost, efficient, and easy to maintain. This makes them an excellent tool for organizations
which attempt to empower communities by providing them with tools they can maintain and use
themselves, rather than just giving aid, which can cause communities to become dependent on outside
assistance.

Composition and calorific value of bio-gas


Analytical services can determine the concentration levels of the permanent gases (CH4, CO2, O2, CO, H2 and N2) both in situ and in subsequent laboratory studies. For on-site measurement, gas microchromatography with a thermal conductivity detector is used. For further study, or for batch analysis, gases (CH4, CO2, O2, CO, H2, N2, C2-C7, H2S) are sampled in Tedlar bags; sampling in multisorbent tubes allows gases and VOCs to be analysed in the laboratory by gas chromatography equipped with a thermal conductivity detector, and by thermal desorption coupled with gas chromatography-mass spectrometry.
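Although the heading mentions calorific value, the text does not give one. As a rough sketch, the heating value of biogas is essentially that of its methane fraction, since CO2 and the minor gases are non-combustible. The figure of about 35.8 MJ/m³ for the lower heating value of pure methane is a commonly quoted value, assumed here:

```python
def biogas_lhv_mj_per_m3(ch4_fraction, ch4_lhv_mj_per_m3=35.8):
    """Approximate lower heating value of biogas: the methane fraction
    times the LHV of pure methane (CO2 and trace gases add no heat)."""
    return ch4_fraction * ch4_lhv_mj_per_m3

# Typical biogas with ~60% methane by volume:
print(round(biogas_lhv_mj_per_m3(0.60), 1))  # 21.5 (MJ/m^3)
```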

Type of waste
Human society produces some unwanted and discarded materials which are called wastes.
Wastes are produced from different activities such as household activities, agricultural activities, industrial activities, hospitals, educational institutions, mining operations, and so on.
These sources generate different types of wastes, many of which are hazardous in nature and cause the spread of many diseases.
Solid wastes:
The solid wastes are the useless and unwanted substances discarded by human society. These include
urban wastes, industrial wastes, agricultural wastes, biomedical wastes and radioactive wastes. The
term refuse is also used for solid waste.
Liquid wastes:
Wastes generated from washing, flushing or manufacturing processes of industries are called liquid
wastes. Such a waste is called sewage. The most common practice is to discharge it on the ground,
nallahs, rivers and other water bodies, often without any treatment.
Gaseous wastes:
These wastes are released in the form of gases from automobiles, factories, burning of fossil fuels etc.
and get mixed in the atmosphere. These gases include carbon monoxide, CO2, sulphur dioxide,
nitrogen dioxide, ozone, methane, etc.
Wastes produced from different sources, are classified as follows:
Municipal Solid Waste:

The wastes, collected from the residential houses, markets, streets and other places mostly in the
urban areas and disposed of by municipal bodies are called municipal solid wastes (MSW). In general,
the urban solid wastes are called refuse. The Municipal solid wastes are a mixture of paper, plastic,
clothes, metals, glass, organic matter etc. generated from households, commercial establishments and
markets.
The proportions of different constituents vary from season to season and place to place depending on
the life style, food habits, standard of living and the extent of commercial and industrial activities in
the area. Municipal solid wastes are collected locally and the amount collected depends upon the size
and consumption of the population. The municipal wastes, their contents and sources are summarized in Tables 17.1 & 17.2.
Industrial Wastes:
Industrial wastes are released from chemical plants, paint industries, cement factories, power plants, metallurgical plants, mining operations, textile industries, food processing industries, petroleum industries and thermal power plants. These industries produce different types of waste products (Table
17.3). Industrial solid wastes can be classified into two groups.
Non-hazardous wastes:
These wastes are produced from food processing plants, cotton mills, paper mills, sugar mills and
textile industries.
Hazardous wastes:
Hazardous wastes are generated by nearly every industry; metals, chemicals, drugs, leather, pulp, electroplating, dyes and rubber are some important examples. Liquid industrial waste that runs into a
stream from a factory can kill the aquatic fauna and also cause health problems for humans.
Agricultural Wastes:
Agricultural areas produce plant and animal wastes. Excess use of fertilizers, pesticides and other chemicals in agriculture, and the wastes formed from them, cause land and water pollution and contaminate the soil. Among pesticides, chlorinated hydrocarbons such as DDT, BHC, endrin, dieldrin and lindane, as well as parathion, malathion and endosulfan, are important; they are absorbed by the soil and contaminate crops grown in it. Other agricultural wastes are produced from sugar factories, tobacco processing units, slaughterhouses, livestock, poultry, etc.
Commercial Wastes:
With the advancement of modern cities, industries and automobiles, huge amounts of waste are generated daily from markets, roads, buildings, hotels, commercial complexes, hostels, auto workshops, printing presses, etc. Hospitals, nursing homes and medical institutes also release tremendous amounts of wastes, which are hazardous and highly toxic in nature.
Many chemicals and disposable items are also produced from these units. These wastes are dumped in
inhabited areas which pose much danger to human health and life and cause several types of

infectious diseases. Apart from wastes, generated from the above sources, there are certain wastes
produced from mining activities and radioactive substances that cause much damage to the society
and environment.
Mining:
The wastes generated by mining activities disturb the physical, chemical and biological features of the
land and atmosphere. The wastes include the overburden material, mine tailings (the waste left after
ore has been extracted from rock), harmful gases released by blasting etc.
Radioactive substances:
Although every precaution is taken in the functioning and maintenance of nuclear reactors, yet it has
been observed that measurable amount of radioactive waste material escapes into the environment.
Other sources of radioactive wastes are from mining of radioactive substances and atomic explosion
etc.
Bio-medical wastes:
Wastes produced from hospitals, medical centres and nursing homes are called bio-medical wastes. These wastes are highly infectious and include used bandages, infected needles,
animal remains, cultures, amputated body organs, dead human foetuses, wastes of surgery and other
materials from biological research centres. Pharmacies discard out-dated and unused drugs; testing
laboratories dispose of chemical wastes which are hazardous in the environment.
Classification of Wastes:
In general, the wastes are classified on the basis of their biological, chemical and physical properties
and also on the basis of nature.
Biodegradable Wastes:
These wastes are natural organic compounds which are degraded or decomposed by biological or
microbial action. Biodegradable wastes are generated in food processing units, cotton mills, paper mills, sugar mills, textile factories and sewage. Waste from slaughterhouses is biodegradable, and some part of it is used; for example, skin is used to make shoes. Most of the wastes from these industries are reused. When these wastes are present in excess, they act as pollutants, since they are not decomposed quickly enough and take much time to break down.
Non-Biodegradable Wastes:
These are not decomposed by microbes but are oxidized and dissociated automatically. Coal stone, metal scraps and sludge are generated from colliery operations. Refineries produce inert dry solids and varieties of sludge containing oil. Fly ash is the major solid waste from thermal power plants. Generally, these wastes are not reused; they accumulate in the ecosystem, and some of them move through biogeochemical cycles. Non-biodegradable wastes also include DDT, pesticides, lead, plastics, mercuric salts, etc.

Hazardous wastes:
Many chemical, biological, explosive or radioactive wastes, which are highly reactive and toxic, pose
a severe danger to human, plants or animal life and are called hazardous wastes. They are highly toxic
in nature. Hazardous wastes, when improperly handled, can cause substantial harm to human health
and to the environment. Hazardous wastes may be in the form of solids, liquids, sludges or gases.
They are generated primarily by chemical production, manufacturing and other industrial activities.
The important hazardous wastes are lead, mercury, cadmium, chromium, many drugs, leather, pesticides, dyes, rubber and effluents from different industries. They may cause danger during
inadequate storage, transportation, treatment or disposal operations. The hazardous waste materials
may be toxic, reactive, ignitable, explosive, corrosive, infectious or radioactive.
1. Ignitability:
These are wastes that easily catch fire, with a flash point less than 60°C. Such fires not only present
immediate dangers but can spread harmful particles over wide areas.
2. Corrosiveness:
These comprise mostly acidic or alkaline wastes that corrode other materials. These require special
containers for disposal and should be separated from other wastes as they release toxic contaminants.
3. Reactivity:
These are explosive or highly reactive wastes. They undergo violent chemical reactions and may explode, generating heat and toxic gases.
4. Toxicity:
These wastes release toxins or poisonous substances and pose hazards to human health and the
environment.
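The first two characteristics above have numeric thresholds and can be sketched as a simple screening check. The flash point limit comes from the text; the pH limits (≤ 2 or ≥ 12.5) are the usual regulatory-style thresholds for corrosivity and are an added assumption:

```python
def hazard_flags(flash_point_c=None, ph=None):
    """Screen a waste for ignitability and corrosivity.

    flash_point_c: flash point in degrees Celsius (ignitable below 60 C,
                   as stated in the text)
    ph: aqueous pH (corrosive at <= 2 or >= 12.5 -- an assumed
        regulatory-style threshold, not from the text)
    """
    flags = []
    if flash_point_c is not None and flash_point_c < 60:
        flags.append("ignitable")
    if ph is not None and (ph <= 2 or ph >= 12.5):
        flags.append("corrosive")
    return flags

print(hazard_flags(flash_point_c=25))        # ['ignitable']
print(hazard_flags(ph=1.5))                  # ['corrosive']
print(hazard_flags(flash_point_c=80, ph=7))  # []
```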
Impacts of Waste Accumulation:
Industrialization on a massive scale, increasing urbanization, advanced technology in agriculture and changing life patterns have resulted in the production of huge amounts of waste. Improper waste
disposal creates many ecological and social problems, for instance, accumulation of wastes in the
densely populated areas, disposal of urban sewage and industrial wastes discharged into rivers etc.
affect soil, air and water ecosystem. Chemical, biological and explosive wastes pose immediate or
long run danger to the life of man, plants and animals.
The dumping of solid wastes is hazardous to human health. It has been estimated that about twenty-five human diseases are associated with solid waste. The number of rats and flies increases due to the dumping of wastes in open places, and they are carriers of organisms responsible for several dreaded diseases.

Flies carrying pathogenic organisms spread diseases like dysentery, diarrhoea, etc. It is
estimated that about 70,000 flies are produced in one cubic foot of garbage. Dumping of solid wastes
has a number of adverse effects on all the components of an ecosystem and they also affect the
aesthetic sense as well.
Some of the impacts of accumulation of wastes are described below:
Spoilage of Landscape:
It is a common practice to dump plastic bags, containers, vegetables, fruit peels, cans etc. in the open
area without thinking about its consequences (Fig. 17.1). We need to be fully aware that improper
disposal of waste spoils the beauty of the landscape.
Pollution:
Dumping of wastes in a haphazard and unscientific manner has serious environmental impact.
Most of the wastes contain organic compounds a number of inorganic minerals and other
harmful matter which contaminate the environment and lead to:
1. Degradation of land,
2. Pollution of drinking water,
3. Destruction of aquatic life,
4. Degradation of ground and surface water used for irrigation and industries, and
5. Improper disposal of wastes cause soil, air and water pollution.
Health Hazards:
Human health is directly concerned with the overall quality of the environment. However, in the last few decades, due to the human desire for rapid advancement in industrialisation, agriculture and other
activities, much damage has been done to the environment. Large-scale deforestation, drastic climatic
changes and pollutants on land, air and water are some of the unpleasant consequences that ultimately
affect the human health.
It is a well-known fact that a healthy adult is exposed every day to polluted air through breathing
and to food and water through oral intake. Our skin is also exposed to the environmental chemicals
which lead to many health problems immediately or after sometime.
Health hazards due to air pollution:
Hazardous air pollutants present in the atmosphere affect human health both directly and indirectly. It
may be a short-term or long-term effect.
The following are the adverse effects on human health:

1. The toxic gas carbon monoxide binds to haemoglobin and reduces the oxygen-carrying capacity of the blood, causing injury to the heart and central nervous system.
2. Sulphur dioxide and sulphuric acid both cause irritation in the respiratory tracts of humans, and high concentrations of sulphur dioxide lead to severe heart and lung diseases like bronchitis, asthma, etc.
3. Nitrogen oxide at higher concentration affects respiratory organs, liver and kidneys.
4. Ozone can seriously affect the pulmonary functions.
5. Lead can cause injury in blood-formation organs and nervous system, especially impairing of brain
functions of new-born babies.
6. Pesticides and radiations are other toxic air pollutants which are very dangerous for human health.
7. Metal dusts, asbestos and hydrocarbons shorten the life span, cause deterioration of the nervous system, and carry an additional risk of cancer.
8. In mining operation, silica and dust cause pneumoconiosis (common disease in mine workers).
9. Petroleum components can affect the blood-forming organs, brain, teeth, bones, etc.
10. Mercury and cadmium are known to damage the kidneys and brain.
Water is said to be polluted when its quality or composition is changed either naturally or as a result
of human activities. Nearly 80% of the human diseases in developing countries are due to polluted
water alone.
The well-known impacts of water pollutants are as follows:
1. A large number of industrial pollutants that reach the human body through drinking water and contaminated food threaten life and health. The famous Minamata and itai-itai diseases took a big toll of human life in Japan due to mercury and cadmium from industrial effluents in the aquatic ecosystem.
2. Some agrochemicals, like chlorinated pesticides disposed of in water, accumulate in aquatic food chains and enter the human body, causing serious harm. In coastal Karnataka, several people died after consuming crabs contaminated with pesticides.
3. Changes in water quality due to deficiency of iodine lead to goitre which has been found to be
endemic in many parts of India.
4. Many water borne diseases prevalent in the Indian population, like cholera, typhoid, gastroenteritis
and hepatitis are due to polluted water.
5. Excess fluoride in drinking water has caused bone and teeth diseases (fluorosis); the most severe form is the knock-knee syndrome seen in Andhra Pradesh.

Health Hazards due to Soil or Land Pollution:


The accumulation of toxic chemical compounds, salts, disease-causing organisms and radioactive
materials in the soil cause various health problems.
The impact of waste accumulation in soil/land has shown the following major health effects:
1. The impact of land pollution on human health is indirect. The pollutants added in the soil enter the
human body through water or air through the food chain.
2. Several agrochemicals like DDT, fluorine, arsenic, lead compounds and organophosphorus compounds are super-toxic and cause symptoms like nausea, vomiting, diarrhoea, sweating, salivation and muscular tremors.
3. Some rodenticides, such as strychnine and sodium fluoroacetate, are blood coagulants.
4. Ethylene dichloride, ethylene dibromide and methyl dibromide accumulate in liver, kidney, heart,
spleen and cause degenerative lesions.
Impact of Waste Accumulation on Terrestrial Life:
Hazardous wastes may pollute soil, air, surface water and underground water. These pollutants may affect man, plants and animals. Toxic substances are transferred to different organisms through the food chain and cause a number of complications in living organisms.
Some of these are as follows:
1. Many toxic chemicals, pesticides and other agricultural wastes released into the environment are taken up by plants from air, water and soil. Plants growing under such conditions are severely affected by these toxic chemicals.
2. Exposure to high concentration of pollutants may cause acute injuries like chlorosis, discolouration
and even the death of plants.
3. Crops show reduced productivity and yield. The quality of plant nutrients is also decreased.
4. Sulphur dioxide is a highly toxic pollutant which damages crops.
5. In recent years, the losses to agriculture and animal life due to fluoride content have greatly
increased.
6. Besides morphological changes, biochemical and physiological changes have also been observed in
many mammals including man.
7. Excessive accumulation of wastes disturbs the behaviour of wild and domestic animals and also causes health problems.
8. Some highly toxic chemicals lead to genetic disorders in animals.

9. Several domestic animals like cows, buffaloes and goats often eat polythene and plastic bags along with food material; these ultimately reach their alimentary canal, causing many disorders and even death.
Impact of Waste Accumulation on Fresh Water:
Large amounts of waste from human society are disposed of in rivers, lakes, ponds and other aquatic bodies, polluting the water and making it unfit for drinking and other domestic purposes.
The impacts of waste dumping on aquatic life are as follows:
1. The toxic wastes reaching the water bodies badly disturb the aquatic life.
2. The sewage of cities is often drained into the rivers, which is dangerous to flora, fauna and human
life.
3. Due to heavy accumulation of wastes into the canals, lakes and rivers, oxygen concentration is
reduced considerably thus affecting the life of fishes and other aquatic populations. In extreme
deficiency of oxygen most of the fishes die.
4. Sewage from municipalities, sanatoria and tanneries discharged into the rivers, canals and lakes etc.
carry many species of bacteria and other microbes which cause diseases in human and animals.
5. Some pollutants for example heavy metals, cyanides and several other organic and inorganic
compounds are harmful to aquatic organisms. Many of them especially non-biodegradable ones
accumulate in the body of organisms and cause long-term effects.
6. Biodiversity decreases in highly polluted aquatic habitats.
7. The DDT and other pesticides present in very low concentrations in water may accumulate to
higher concentration within algae, insects and fishes. The birds or people that feed on these fishes are
then exposed to very high levels of hazardous substances. In birds, these substances can affect the egg
production and bone formation.
Impact of Waste Accumulation on Marine Life:
One of the least known but most significant uses of the sea is as an enormous dumpsite. In the past,
the oceans were able to assimilate the wastes of the civilization without noticeable adverse effects.
However, industrialization and other associated developments along with sharp increase in global
population have given rise to huge amounts of wastes that are now taxing the capacity of the oceans to
absorb them. Human wastes ranging from the raw sewage of urban centres to junked appliances and
automobiles have heavily polluted the sea shores.
The impacts of waste dumping on marine life are as follows:
1. The growth of marine algae is affected.

2. Massive oil spills not only spoil innumerable beaches and estuaries but also cause widespread
damage to marine life.
3. Herbicides and pesticides (especially the organochlorides) reach the oceans via the wind and rivers and contaminate marine water.
4. It is a matter of great concern that mangrove forests are being damaged at an alarming rate due to
disposal of wastes along sea shores.
5. Thermal and radioactive pollution have disturbed the life of fishes in estuaries and coastal
ecosystems. Their breeding is also affected adversely.

Tidal energy
Tidal power, also called tidal energy, is a form of hydropower that converts the energy obtained
from tides into useful forms of power, mainly electricity.
Although not yet widely used, tidal power has potential for future electricity generation. Tides are
more predictable than wind energy and solar power. Among sources of renewable energy, tidal power
has traditionally suffered from relatively high cost and limited availability of sites with sufficiently
high tidal ranges or flow velocities, thus constricting its total availability. However, many recent technological developments and improvements, both in design (e.g. dynamic tidal power, tidal lagoons) and turbine technology (e.g. new axial turbines, cross flow turbines), indicate that the
tidal lagoons) and turbine technology (e.g. new axial turbines, cross flow turbines), indicate that the
total availability of tidal power may be much higher than previously assumed, and that economic and
environmental costs may be brought down to competitive levels.
Historically, tide mills have been used both in Europe and on the Atlantic coast of North America. The
incoming water was contained in large storage ponds, and as the tide went out, it turned waterwheels
that used the mechanical power it produced to mill grain. [1] The earliest occurrences date from the
Middle Ages, or even from Roman times.[2][3] It was only in the 19th century that the process of using
falling water and spinning turbines to create electricity was introduced in the U.S. and Europe. [4]
The world's first large-scale tidal power plant was the Rance Tidal Power Station in France, which
became operational in 1966. It was the largest tidal power station in terms of output until Sihwa Lake
Tidal Power Station opened in South Korea in August, 2011. The Sihwa station uses sea wall defense
barriers complete with 10 turbines generating 254 MW.[5]
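The scale of barrage plants like Rance and Sihwa can be estimated from the potential energy of a filled tidal basin, E = ½ρgAh² (basin area A, tidal range h). This is a standard textbook estimate, and the basin size and tidal range below are illustrative assumptions, not data for either plant:

```python
def barrage_energy_per_tide_j(basin_area_m2, tidal_range_m,
                              seawater_density=1025.0, g=9.81):
    """Potential energy released by emptying a tidal basin once:
    E = 0.5 * rho * g * A * h^2 (the water's centre of mass falls h/2)."""
    return 0.5 * seawater_density * g * basin_area_m2 * tidal_range_m ** 2

# Assumed: a 1 km^2 basin with an 8 m tidal range
e_joules = barrage_energy_per_tide_j(1e6, 8.0)
print(f"{e_joules:.3g} J, about {e_joules / 3.6e9:.0f} MWh per tide")
```

Since tides arrive roughly twice a day, even this modest assumed basin corresponds to a substantial daily energy store, which is why site selection (area and range) dominates tidal barrage economics.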

Generation of tidal energy


Tidal power is taken from the Earth's oceanic tides. Tidal forces are periodic variations in the gravitational attraction exerted by celestial bodies. These forces create corresponding motions or currents in the world's oceans. The strong gravitational pull on the oceans creates a bulge in the water level, causing a temporary increase in sea level. When the sea level is raised, water from the middle of the ocean is forced to move toward the shorelines, creating a tide. This occurrence takes place in an unfailing manner, due to the consistent pattern of the Moon's orbit around the Earth.[7] The magnitude and character of this motion reflect the changing positions of the Moon and Sun relative to the Earth, the effects of Earth's rotation, and the local geography of the sea floor and coastlines.
Tidal power is the only technology that draws on energy inherent in the orbital characteristics of the
Earth–Moon system, and to a lesser extent in the Earth–Sun system. Other natural energies exploited
by human technology originate directly or indirectly with the Sun, including fossil fuel, conventional
hydroelectric, wind, biofuel, wave and solar energy. Nuclear energy makes use of Earth's mineral
deposits of fissionable elements, while geothermal power taps the Earth's internal heat, which comes
from a combination of residual heat from planetary accretion (about 20%) and heat produced through
radioactive decay (80%).[8]
A tidal generator converts the energy of tidal flows into electricity. Greater tidal variation and higher
tidal current velocities can dramatically increase the potential of a site for tidal electricity generation.
Because the Earth's tides are ultimately due to gravitational interaction with the Moon and Sun and
the Earth's rotation, tidal power is practically inexhaustible and classified as a renewable energy
resource. Movement of tides causes a loss of mechanical energy in the Earth–Moon system: this is a
result of pumping of water through natural restrictions around coastlines and consequent viscous
dissipation at the seabed and in turbulence. This loss of energy has caused the rotation of the Earth to
slow in the 4.5 billion years since its formation. During the last 620 million years the period of
rotation of the earth (length of a day) has increased from 21.9 hours to 24 hours;[9] in this period the
Earth has lost 17% of its rotational energy. While tidal power will take additional energy from the
system, the effect is negligible and would only be noticed over millions of years.

Generating methods
Tidal stream generator
Tidal stream generators (or TSGs) make use of the kinetic energy of moving water to power turbines,
much as wind turbines use moving air. Some tidal generators can be built
into the structures of existing bridges or are entirely submersed, thus avoiding concerns over impact
on the natural landscape. Land constrictions such as straits or inlets can create high velocities at
specific sites, which can be captured with the use of turbines. These turbines can be horizontal,
vertical, open, or ducted and are typically placed near the bottom of the water column where tidal
velocities are greatest.[12]
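The kinetic power available to such turbines follows the same cubic relation as wind power, P = ½ρAv³, but seawater is roughly 800 times denser than air, so modest currents carry substantial power. A minimal sketch (speeds and areas illustrative):

```python
# Kinetic power carried by a fluid stream through a cross-section (sketch).
# P = 0.5 * rho * A * v^3 -- the same relation applies to wind and water,
# but seawater is ~800x denser than air.

def kinetic_power_w(rho_kg_m3, area_m2, speed_m_s):
    """Power in watts carried by a fluid stream through area_m2."""
    return 0.5 * rho_kg_m3 * area_m2 * speed_m_s ** 3

SEAWATER = 1025.0  # kg/m^3
AIR = 1.225        # kg/m^3

# A 2.5 m/s tidal current through 1 m^2 vs. wind at the same speed:
tidal = kinetic_power_w(SEAWATER, 1.0, 2.5)  # ~8.0 kW
wind = kinetic_power_w(AIR, 1.0, 2.5)        # ~9.6 W
```

This density advantage is why tidal turbines can be far smaller than wind turbines of comparable rated power.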

Tidal barrage
Tidal barrages make use of the potential energy in the difference in height (or hydraulic head)
between high and low tides. When using tidal barrages to generate power, the potential energy from a
tide is captured through strategic placement of specialized dams. When the sea level rises and the tide
begins to come in, the temporary increase in tidal power is channeled into a large basin behind the
dam, holding a large amount of potential energy. With the receding tide, this energy is then converted
into mechanical energy as the water is released through large turbines that create electrical power
through the use of generators.[13] Barrages are essentially dams across the full width of a tidal estuary.

Dynamic tidal power


Dynamic tidal power (or DTP) is an untried but promising technology that would exploit an
interaction between potential and kinetic energies in tidal flows. It proposes that very long dams (for
example: 3050 km length) be built from coasts straight out into the sea or ocean, without enclosing
an area. Tidal phase differences are introduced across the dam, leading to a significant water-level
differential in shallow coastal seas featuring strong coast-parallel oscillating tidal currents such as
found in the UK, China, and Korea.

Tidal lagoon
A newer tidal energy design option is to construct circular retaining walls embedded with turbines that
can capture the potential energy of tides. The created reservoirs are similar to those of tidal barrages,
except that the location is artificial and does not contain a preexisting ecosystem. [12] The lagoons can
also be in double (or triple) format without pumping[14] or with pumping[15] that will flatten out the
power output. The pumping power could be supplied by renewable generation that exceeds grid
demand, for example from wind turbines or solar photovoltaic arrays; rather than being curtailed,
the excess energy could be stored for later use. Geographically dispersed tidal lagoons whose peak
production times are offset would also smooth output, providing near base-load production, though at
a higher cost than some alternatives such as district-heating renewable energy storage. The proposed
Tidal Lagoon Swansea Bay in Wales, United Kingdom
would be the first tidal power station of this type once built.

Environmental concerns
Tidal power can have effects on marine life. The turbines can accidentally kill swimming sea life with
the rotating blades, although projects such as the one in Strangford feature a safety mechanism that
turns off the turbine when marine animals approach. [40] Some fish may no longer utilize the area if
threatened with a constantly rotating or noise-making object. Marine life is a major consideration
when siting tidal power generators, and precautions are taken to ensure that as few marine animals
as possible are affected. The Tethys database provides access to scientific
literature and general information on the potential environmental effects of tidal energy.[41]
Tidal turbines
The main environmental concern with tidal energy is associated with blade strike and entanglement of
marine organisms as high speed water increases the risk of organisms being pushed near or through
these devices. As with all offshore renewable energies, there is also a concern about how the creation
of EMF and acoustic outputs may affect marine organisms. Because these devices are in the water,
their acoustic output can be greater than that of offshore wind turbines. Depending on the frequency
and amplitude of sound generated by the tidal energy devices,
this acoustic output can have varying effects on marine mammals (particularly those who echolocate
to communicate and navigate in the marine environment, such as dolphins and whales). Tidal energy
removal can also cause environmental concerns such as degrading farfield water quality and
disrupting sediment processes.[42] Depending on the size of the project, these effects can range from
small traces of sediment building up near the tidal device to severely affecting nearshore ecosystems
and processes.[43]
Tidal barrage
Installing a barrage may change the shoreline within the bay or estuary, affecting a large ecosystem
that depends on tidal flats. Inhibiting the flow of water in and out of the bay, there may also be less
flushing of the bay or estuary, causing additional turbidity (suspended solids) and less saltwater,
which may result in the death of fish that act as a vital food source to birds and mammals. Migrating
fish may also be unable to access breeding streams, and may attempt to pass through the turbines. The
same acoustic concerns apply to tidal barrages. Decreasing shipping accessibility can become a
socioeconomic issue, though locks can be added to allow slow passage. However, the barrage may improve
the local economy by increasing land access as a bridge. Calmer waters may also allow better
recreation in the bay or estuary.[43] In August 2004, a humpback whale swam through the open sluice
gate of the Annapolis Royal Generating Station at slack tide, ending up trapped for several days
before eventually finding its way out to the Annapolis Basin.[44]
Tidal lagoon
Environmentally, the main concerns are blade strike on fish attempting to enter the lagoon, acoustic
output from turbines, and changes in sedimentation processes. However, all these effects are localized
and do not affect the entire estuary or bay.

Wind power
Wind power is the use of air flow through wind turbines to mechanically power generators for
electricity. Wind power, as an alternative to burning fossil fuels, is plentiful, renewable, widely
distributed, clean, produces no greenhouse gas emissions during operation, consumes no water, and
uses little land.[2] The net effects on the environment are far less problematic than those of
nonrenewable power sources.
Wind farms consist of many individual wind turbines which are connected to the electric power
transmission network. Onshore wind is an inexpensive source of electricity, competitive with or in
many places cheaper than coal or gas plants. [3][4][5] Offshore wind is steadier and stronger than on land,
and offshore farms have less visual impact, but construction and maintenance costs are considerably
higher. Small onshore wind farms can feed some energy into the grid or provide electricity to isolated
off-grid locations.[6]
Wind power gives variable power which is very consistent from year to year but which has significant
variation over shorter time scales. It is therefore used in conjunction with other electric power sources
to give a reliable supply. As the proportion of wind power in a region increases, a need to upgrade the
grid, and a lowered ability to supplant conventional production can occur.[7][8] Power management
techniques such as having excess capacity, geographically distributed turbines, dispatchable backing
sources, sufficient hydroelectric power, exporting and importing power to neighboring areas, using
vehicle-to-grid strategies or reducing demand when wind production is low, can in many cases
overcome these problems.[9][10] In addition, weather forecasting permits the electricity network to be
readied for the predictable variations in production that occur.

Generator characteristics and stability


Induction generators, which were often used for wind power projects in the 1980s and 1990s, require
reactive power for excitation so substations used in wind-power collection systems include substantial
capacitor banks for power factor correction. Different types of wind turbine generators behave
differently during transmission grid disturbances, so extensive modelling of the dynamic
electromechanical characteristics of a new wind farm is required by transmission system operators to
ensure predictable stable behaviour during system faults. In particular, induction generators cannot
support the system voltage during faults, unlike steam or hydro turbine-driven synchronous
generators.

These generators are no longer used in modern turbines. Instead, most turbines now use variable-speed
generators combined with a partial- or full-scale power converter between the turbine generator and
the collector system, which generally have more desirable properties for grid interconnection and have
low-voltage ride-through capabilities.[37] Modern concepts use either doubly fed machines with
partial-scale converters, or squirrel-cage induction generators or synchronous generators (both
permanently and electrically excited) with full-scale converters.[38]
Transmission systems operators will supply a wind farm developer with a grid code to specify the
requirements for interconnection to the transmission grid. This will include power factor, constancy of
frequency and dynamic behaviour of the wind farm turbines during a system fault.

Wind power capacity and production


Worldwide there are now over two hundred thousand wind turbines operating, with a total nameplate
capacity of 432,000 MW as of end 2015.[56] The European Union alone passed some 100,000 MW
nameplate capacity in September 2012,[57] while the United States surpassed 75,000 MW in 2015 and
China's grid connected capacity passed 145,000 MW in 2015.[56]
World wind generation capacity more than quadrupled between 2000 and 2006, doubling about every
three years. The United States pioneered wind farms and led the world in installed capacity in the
1980s and into the 1990s. In 1997 installed capacity in Germany surpassed the U.S. and led until once
again overtaken by the U.S. in 2008. China has been rapidly expanding its wind installations in the
late 2000s and passed the U.S. in 2010 to become the world leader. As of 2011, 83 countries around
the world were using wind power on a commercial basis. [16]
Wind power capacity has expanded rapidly to 336 GW in June 2014, and wind energy production was
around 4% of total worldwide electricity usage, and growing rapidly.[18] The actual amount of
electricity that wind is able to generate is calculated by multiplying the nameplate capacity by the
capacity factor, which varies according to equipment and location. Estimates of the capacity factors
for wind installations are in the range of 35% to 44%.[58]
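As a sketch of this calculation, using the capacity and capacity-factor figures quoted above:

```python
# Annual electricity from nameplate capacity and capacity factor (sketch).
# generation = nameplate power * hours per year * capacity factor

def annual_generation_twh(nameplate_gw, capacity_factor):
    """Annual generation in TWh from nameplate capacity in GW."""
    hours = 8760  # hours in a non-leap year
    return nameplate_gw * hours * capacity_factor / 1000.0  # GWh -> TWh

# 336 GW of world capacity (June 2014) at a 35-44% capacity factor:
low = annual_generation_twh(336, 0.35)   # ~1030 TWh
high = annual_generation_twh(336, 0.44)  # ~1295 TWh
```

Nameplate capacity alone therefore overstates real output by roughly a factor of 2.5 to 3 for wind.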
Europe accounted for 48% of the world total wind power generation capacity in 2009. In 2010, Spain
became Europe's leading producer of wind energy, achieving 42,976 GWh. Germany held the top spot
in Europe in terms of installed capacity, with a total of 27,215 MW as of 31 December 2010. [59] In
2015 wind power constituted 15.6% of all installed power generation capacity in the EU and it
generates around 11.4% of its power.
Environmental effects
The environmental impact of wind power, compared to that of fossil fuels, is relatively minor.
According to the IPCC, in assessments of the life-cycle global warming potential of energy sources,
wind turbines have a median value of 12 or 11 gCO2eq/kWh, depending on whether off- or onshore
turbines are being assessed.[172][173] Compared with other low
carbon power sources, wind turbines have some of the lowest global warming potential per unit of
electrical energy generated.[174]

While a wind farm may cover a large area of land, many land uses such as agriculture are compatible
with it, as only small areas of turbine foundations and infrastructure are made unavailable for
use.[175][176]

There are reports of bird and bat mortality at wind turbines as there are around other artificial
structures. The scale of the ecological impact may[177] or may not[178] be significant, depending on
specific circumstances. Prevention and mitigation of wildlife fatalities, and protection of peat bogs,[179]
affect the siting and operation of wind turbines.
Wind turbines generate some noise. At a residential distance of 300 metres (980 ft) this may be around
45 dB, which is slightly louder than a refrigerator. At 1.5 km (1 mi) distance they become
inaudible.[180][181] There are anecdotal reports of negative health effects from noise on people who
live very close to wind turbines.[182] Peer-reviewed research has generally not supported these
claims.[183][184][185]
The United States Air Force and Navy have expressed concern that siting large wind turbines near
bases "will negatively impact radar to the point that air traffic controllers will lose the location of
aircraft."[186]
Aesthetic aspects of wind turbines and resulting changes of the visual landscape are significant. [187]
Conflicts arise especially in scenic and heritage protected landscapes.
Wind energy
Distribution of wind speed (red) and energy (blue) for all of 2002 at the Lee Ranch facility
in Colorado. The histogram shows measured data, while the curve is the Rayleigh model
distribution for the same average wind speed.
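The Rayleigh model named in the caption matters for power estimates: because power scales with v³, the mean of v³ over a Rayleigh distribution exceeds the cube of the mean speed by a factor of 6/π (about 1.91). A Monte Carlo sketch, with the scale parameter assumed:

```python
import math
import random

# Monte Carlo sketch of the Rayleigh wind-speed model (scale parameter assumed).
# Because power scales with v^3, the mean of v^3 over a Rayleigh distribution
# exceeds the cube of the mean speed by a factor of 6/pi (~1.91).

random.seed(0)
sigma = 5.0  # Rayleigh scale parameter in m/s (illustrative)

# Inverse-transform sampling: v = sigma * sqrt(-2 ln U), U uniform on (0, 1]
speeds = [sigma * math.sqrt(-2.0 * math.log(1.0 - random.random()))
          for _ in range(200_000)]

mean_v = sum(speeds) / len(speeds)
mean_v3 = sum(v ** 3 for v in speeds) / len(speeds)
ratio = mean_v3 / mean_v ** 3  # approaches 6/pi
```

This is why wind-resource surveys report the full speed distribution rather than only the annual average speed.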

Wind energy is the kinetic energy of air in motion, also called wind. Total wind energy flowing
through an imaginary surface with area A during the time t is:

E = (1/2) m v^2 = (1/2) (ρ A v t) v^2 = (1/2) A t ρ v^3

Here ρ is the density of air; v is the wind speed; Avt is the volume of air passing
through A (which is considered perpendicular to the direction of the wind); ρAvt is
therefore the mass m passing through A. Note that (1/2) ρ v^2 is the kinetic energy of
the moving air per unit volume.

Power is energy per unit time, so the wind power incident on A (e.g. equal to the rotor area of a wind
turbine) is:

P = E / t = (1/2) A ρ v^3
Wind power in an open air stream is thus proportional to the third power of the
wind speed; the available power increases eightfold when the wind speed
doubles. Wind turbines for grid electricity therefore need to be especially efficient
at greater wind speeds.
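A minimal numeric illustration of this cubic relationship (air density and speeds illustrative):

```python
# The cubic dependence of wind power on speed (sketch; values illustrative).
# P = 0.5 * rho * A * v^3, evaluated per square metre of rotor area.

def wind_power_w(v, area_m2=1.0, rho=1.225):
    """Wind power in watts through area_m2 at speed v (m/s)."""
    return 0.5 * rho * area_m2 * v ** 3

p6 = wind_power_w(6.0)    # ~132 W/m^2
p12 = wind_power_w(12.0)  # ~1058 W/m^2: doubling the speed gives 8x the power
```

This is why a small difference in average site wind speed translates into a large difference in annual yield.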

Wind is the movement of air across the surface of the Earth, affected by areas of high pressure and of
low pressure.[247] The global wind kinetic energy averaged approximately 1.50 MJ/m2 over the period
from 1979 to 2010, 1.31 MJ/m2 in the Northern Hemisphere with 1.70 MJ/m2 in the Southern
Hemisphere. The atmosphere acts as a thermal engine, absorbing heat at higher temperatures,
releasing heat at lower temperatures. The process is responsible for production of wind kinetic energy
at a rate of 2.46 W/m2, thus sustaining the circulation of the atmosphere against frictional
dissipation.[248]

A global 1 km2 map of wind resources is housed at http://irena.masdar.ac.ae/?map=103, based on
calculations by the Technical University of Denmark.[249][250][251]
The total amount of economically extractable power available from the wind is considerably more
than present human power use from all sources. [252] Axel Kleidon of the Max Planck Institute in
Germany, carried out a "top down" calculation on how much wind energy there is, starting with the
incoming solar radiation that drives the winds by creating temperature differences in the atmosphere.
He concluded that somewhere between 18 TW and 68 TW could be extracted.

Thermoelectric
The thermoelectric effect is the direct conversion of temperature differences to electric voltage and
vice versa. A thermoelectric device creates voltage when there is a different temperature on each side.
Conversely, when a voltage is applied to it, it creates a temperature difference. At the atomic scale, an
applied temperature gradient causes charge carriers in the material to diffuse from the hot side to the
cold side.
This effect can be used to generate electricity, measure temperature or change the temperature of
objects. Because the direction of heating and cooling is determined by the polarity of the applied
voltage, thermoelectric devices can be used as temperature controllers.
The term "thermoelectric effect" encompasses three separately identified effects: the Seebeck effect,
the Peltier effect, and the Thomson effect. Textbooks may refer to it as the Peltier–Seebeck effect. This
separation derives from the independent discoveries of French physicist Jean Charles Athanase Peltier
and Baltic German physicist Thomas Johann Seebeck. Joule heating, the heat that is generated
whenever a current is passed through a resistive material, is related, though it is not generally termed
a thermoelectric effect. The Peltier–Seebeck and Thomson effects are thermodynamically
reversible,[1] whereas Joule heating is not.
Seebeck effect
The Seebeck effect is the conversion of heat directly into electricity at the junction of different types
of wire. It is named for the Baltic German physicist Thomas Johann Seebeck, who in 1821 discovered
that a compass needle would be deflected by a closed loop formed by two different metals joined in
two places, with a temperature difference between the joints. This was because the electron energy
levels in each metal shifted differently and a voltage difference between the junctions created an
electrical current and therefore a magnetic field around the wires. Seebeck did not recognize there was
an electric current involved, so he called the phenomenon "thermomagnetic effect." Danish physicist
Hans Christian Ørsted rectified the oversight and coined the term "thermoelectricity".
The Seebeck effect is a classic example of an electromotive force (emf) and leads to measurable
currents or voltages in the same way as any other emf. Electromotive forces modify Ohm's law by
generating currents even in the absence of voltage differences (or vice versa); the local current density
is given by J = σ(−∇V + E_emf), where V is the local voltage[2] and σ is the local conductivity. In
general, the Seebeck effect is described locally by the creation of an electromotive field
E_emf = −S ∇T, where S is the Seebeck coefficient (also known as thermopower), a property of the
local material, and ∇T is the gradient in temperature T.

The Seebeck coefficient generally varies as a function of temperature, and depends strongly on the
composition of the conductor. For ordinary materials at room temperature, the Seebeck coefficient
may range in value from −100 μV/K to +1,000 μV/K (see the Seebeck coefficient article for more
information).
If the system reaches a steady state where J = 0, then the voltage gradient is given simply by the emf:
∇V = −S ∇T.
This simple relationship, which does not depend on conductivity, is used in the thermocouple to
measure a temperature difference; an absolute temperature may be found by performing the voltage
measurement at a known reference temperature. A metal of unknown composition can be classified by
its thermoelectric effect if a metallic probe of known composition is kept at a constant temperature
and held in contact with the unknown sample that is locally heated to the probe temperature. It is used
commercially to identify metal alloys. Thermocouples in series form a thermopile. Thermoelectric
generators are used for creating power from heat differentials.
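As a sketch of thermocouple use, assuming a constant relative Seebeck coefficient for the wire pair (real thermocouples use calibrated polynomial tables rather than a single constant):

```python
# Thermocouple temperature difference from measured voltage (sketch).
# In steady state the open-circuit voltage is V ~ S_AB * (T_hot - T_ref), where
# S_AB is the relative Seebeck coefficient of the wire pair. Treating S_AB as
# constant is an approximation valid only over a narrow temperature span.

S_AB = 41e-6  # V/K, roughly a type-K pair near room temperature (assumed)

def delta_t_from_voltage(v_volts, s_ab=S_AB):
    """Temperature difference in kelvin inferred from thermocouple voltage."""
    return v_volts / s_ab

# 2.05 mV measured against a reference junction at a known temperature:
dt = delta_t_from_voltage(2.05e-3)  # ~50 K above the reference
```

Adding the known reference-junction temperature to `dt` then gives the absolute temperature, as described above.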

Thermoelectric generators
The Seebeck effect is used in thermoelectric generators, which function like heat engines, but are less
bulky, have no moving parts, and are typically more expensive and less efficient. They have a use in
power plants for converting waste heat into additional electrical power (a form of energy recycling)
and in automobiles as automotive thermoelectric generators (ATGs) for increasing fuel efficiency.
Space probes often use radioisotope thermoelectric generators with the same mechanism but using
radioisotopes to generate the required heat difference. Recent uses include body-heat-powered
lighting.[5]

Peltier effect
The Peltier effect can be used to create a refrigerator that is compact and has no circulating fluid or
moving parts. Such refrigerators are useful in applications where their advantages outweigh the
disadvantage of their very low efficiency.

Temperature measurement
Thermocouples and thermopiles are devices that use the Seebeck effect to measure the temperature
difference between two objects. Thermocouples are often used to measure high temperatures, holding
the temperature of one junction constant or measuring it independently (cold junction compensation).
Thermopiles use many thermocouples electrically connected in series, for sensitive measurements of
very small temperature difference.

Thermal cyclers for polymerase chain reaction


The Peltier effect is used by many thermal cyclers, laboratory devices used to amplify DNA by the
polymerase chain reaction (PCR). PCR requires the cyclic heating and cooling of samples to specified
temperatures. The inclusion of many thermocouples in a small space enables many samples to be
amplified in parallel.
Photoelectric effect
The photoelectric effect requires photons with energies ranging from near zero (in the case of negative
electron affinity) to over 1 MeV for core electrons in elements with a high atomic number. Emission
of conduction electrons from typical metals usually requires a few electron-volts, corresponding to
short-wavelength visible or ultraviolet light. Study of the photoelectric effect led to important steps in

understanding the quantum nature of light and electrons and influenced the formation of the concept
of wave–particle duality.[1] Other phenomena where light affects the movement of electric charges
include the photoconductive effect (also known as photoconductivity or photoresistivity), the
photovoltaic effect, and the photoelectrochemical effect.
Photoemission can occur from any material, but it is most easily observable from metals or other
conductors because the process produces a charge imbalance, and if this charge imbalance is not
neutralized by current flow (enabled by conductivity), the potential barrier to emission increases until
the emission current ceases. It is also usual to have the emitting surface in a vacuum, since gases
impede the flow of photoelectrons and make them difficult to observe. Additionally, the energy barrier
to photoemission is usually increased by thin oxide layers on metal surfaces if the metal has been
exposed to oxygen, so most practical experiments and devices based on the photoelectric effect use
clean metal surfaces in a vacuum.

Emission mechanism
The photons of a light beam have a characteristic energy proportional to the frequency of the light. In
the photoemission process, if an electron within some material absorbs the energy of one photon and
acquires more energy than the work function (the electron binding energy) of the material, it is
ejected. If the photon energy is too low, the electron is unable to escape the material. Since an increase
in the intensity of low-frequency light will only increase the number of low-energy photons sent over
a given interval of time, this change in intensity will not create any single photon with enough energy
to dislodge an electron. Thus, the energy of the emitted electrons does not depend on the intensity of
the incoming light, but only on the energy (equivalently frequency) of the individual photons. It is an
interaction between the incident photon and the outermost electrons.
Electrons can absorb energy from photons when irradiated, but they usually follow an "all or nothing"
principle. All of the energy from one photon must be absorbed and used to liberate one electron from
atomic binding, or else the energy is re-emitted. If the photon energy is absorbed, some of the energy
liberates the electron from the atom, and the rest contributes to the electron's kinetic energy as a free
particle.[6][7][8]

Experimental observations of photoelectric emission


The theory of the photoelectric effect must explain the experimental observations of the emission of
electrons from an illuminated metal surface.
For a given metal, there exists a certain minimum frequency of incident radiation below which no
photoelectrons are emitted. This frequency is called the threshold frequency. Increasing the frequency
of the incident beam, keeping the number of incident photons fixed (this would result in a
proportionate increase in energy) increases the maximum kinetic energy of the photoelectrons
emitted. Thus the stopping voltage increases. The number of electrons also changes because the
probability that each photon results in an emitted electron is a function of photon energy. If the
intensity of the incident radiation of a given frequency is increased, there is no effect on the kinetic
energy of each photoelectron.

Above the threshold frequency, the maximum kinetic energy of the emitted photoelectron depends on
the frequency of the incident light, but is independent of the intensity of the incident light so long as
the latter is not too high.[9]
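These observations are summarized by Einstein's photoelectric equation, KE_max = hf − φ, where φ is the work function; no electrons are emitted below the threshold frequency φ/h. A minimal sketch with an assumed work function:

```python
# Einstein's photoelectric relation (sketch; work function value assumed).
# KE_max = h*f - phi; electrons are emitted only when f exceeds phi/h.

H = 6.626e-34         # Planck constant, J*s
EV = 1.602e-19        # joules per electron-volt

def ke_max_ev(freq_hz, work_function_ev):
    """Maximum photoelectron kinetic energy in eV (0 below threshold)."""
    ke = (H * freq_hz) / EV - work_function_ev
    return max(ke, 0.0)

phi = 2.3  # eV, roughly sodium (assumed)
threshold_hz = phi * EV / H        # ~5.6e14 Hz threshold frequency
ke = ke_max_ev(6.9e14, phi)        # ~0.55 eV; stopping potential ~0.55 V
```

Numerically, the stopping potential in volts equals KE_max in electron-volts, which is why measuring the stopping potential at several frequencies yields both h and φ.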
For a given metal and frequency of incident radiation, the rate at which photoelectrons are ejected is
directly proportional to the intensity of the incident light. An increase in the intensity of the incident
beam (keeping the frequency fixed) increases the magnitude of the photoelectric current, although the
stopping voltage remains the same.
The time lag between the incidence of radiation and the emission of a photoelectron is very small, less
than 10⁻⁹ seconds.
The direction of distribution of emitted electrons peaks in the direction of polarization (the direction
of the electric field) of the incident light, if it is linearly polarized.

Stopping potential
The relation between current and applied voltage illustrates the nature of the photoelectric effect. For
discussion, a light source illuminates a plate P, and another plate electrode Q collects any emitted
electrons. We vary the potential between P and Q and measure the current flowing in the external
circuit between the two plates.
Work function and cut off frequency
If the frequency and the intensity of the incident radiation are fixed, the photoelectric current
increases gradually with an increase in the positive potential on the collector electrode until all the
photoelectrons emitted are collected. The photoelectric current attains a saturation value and does not
increase further for any increase in the positive potential. The saturation current increases with the
increase of the light intensity. It also increases with greater frequencies due to a greater probability of
electron emission when collisions happen with higher energy photons.
If we apply a negative potential to the collector plate Q with respect to the plate P and gradually
increase it, the photoelectric current decreases, becoming zero at a certain negative potential. The
negative potential on the collector at which the photoelectric current becomes zero is called the
stopping potential or cut-off potential.

Uses and effects


Photomultipliers
Main article: Photomultiplier
These are extremely light-sensitive vacuum tubes with a photocathode coated onto part (an end or
side) of the inside of the envelope. The photocathode contains combinations of materials such as
caesium, rubidium and antimony specially selected to provide a low work function, so when
illuminated even by very low levels of light, the photocathode readily releases electrons. By means of
a series of electrodes (dynodes) at ever-higher potentials, these electrons are accelerated and
substantially increased in number through secondary emission to provide a readily detectable output
current. Photomultipliers are still commonly used wherever low levels of light must be detected. [56]

Image sensors
Video camera tubes in the early days of television used the photoelectric effect, for example, Philo
Farnsworth's "Image dissector" used a screen charged by the photoelectric effect to transform an
optical image into a scanned electronic signal.

Gold-leaf electroscope
Gold-leaf electroscopes are designed to detect static electricity. Charge placed on the metal cap
spreads to the stem and the gold leaf of the electroscope. Because they then have the same charge, the
stem and leaf repel each other. This will cause the leaf to bend away from the stem.
The electroscope is an important tool in illustrating the photoelectric effect. For example, if the
electroscope is negatively charged throughout, there is an excess of electrons and the leaf is separated
from the stem. If high-frequency light shines on the cap, the electroscope discharges and the leaf will
fall limp. This is because the frequency of the light shining on the cap is above the cap's threshold
frequency. The photons in the light have enough energy to liberate electrons from the cap, reducing its
negative charge. This will discharge a negatively charged electroscope and further charge a positive
electroscope. However, if the electromagnetic radiation hitting the metal cap does not have a high
enough frequency (its frequency is below the threshold value for the cap), then the leaf will never
discharge, no matter how long one shines the low-frequency light at the cap.[58]:389–390

Photoelectron spectroscopy
Since the energy of the photoelectrons emitted is exactly the energy of the incident photon minus the
material's work function or binding energy, the work function of a sample can be determined by
bombarding it with a monochromatic X-ray source or UV source, and measuring the kinetic energy
distribution of the electrons emitted.[14]:14–20
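A minimal sketch of this determination, with illustrative numbers (the measured kinetic energy here is hypothetical):

```python
# Determining a work function from photoemission (sketch; numbers illustrative).
# With a monochromatic source of photon energy E_ph, the fastest electrons
# leave with KE_max = E_ph - phi, so phi = E_ph - KE_max.

def work_function_ev(photon_ev, ke_max_ev):
    """Work function in eV from photon energy and max photoelectron energy."""
    return photon_ev - ke_max_ev

# He I UV line at 21.22 eV; fastest electrons measured at 16.6 eV (hypothetical):
phi = work_function_ev(21.22, 16.6)  # ~4.6 eV, typical of a clean metal surface
```

In practice the full kinetic-energy distribution, not just its maximum, is recorded, since the lower-energy peaks carry the elemental binding-energy information described below.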
Photoelectron spectroscopy is usually done in a high-vacuum environment, since the electrons would
be scattered by gas molecules if they were present. However, some companies are now selling
products that allow photoemission in air. The light source can be a laser, a discharge tube, or a
synchrotron radiation source.[59]
The concentric hemispherical analyser (CHA) is a typical electron energy analyzer, and uses an
electric field to change the directions of incident electrons, depending on their kinetic energies. For
every element and core level (atomic orbital) there will be a different binding energy. The many electrons
created from each of these combinations will show up as spikes in the analyzer output, and these can
be used to determine the elemental composition of the sample.
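The subtraction described above is simple enough to sketch directly. The Al Kα photon energy (1486.6 eV) is a common monochromatic XPS source; the kinetic-energy peaks below are hypothetical analyzer readings, chosen so the recovered binding energies land near the well-known carbon 1s and oxygen 1s core levels.

```python
# Photoelectron spectroscopy relation for a monochromatic source:
#   binding_energy = photon_energy - measured_kinetic_energy
PHOTON_EV = 1486.6  # Al K-alpha X-ray line, a standard XPS source (eV)

def binding_energies(kinetic_peaks_eV):
    """Convert measured kinetic-energy peaks (eV) to binding energies (eV)."""
    return [round(PHOTON_EV - ke, 1) for ke in kinetic_peaks_eV]

# Hypothetical kinetic-energy peaks from the analyzer output:
peaks = [1202.0, 954.0]
print(binding_energies(peaks))  # -> [284.6, 532.6], near the C 1s and O 1s levels
```

Matching each recovered binding energy against tabulated core-level values is what lets the analyzer output identify the elements present in the sample.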

Spacecraft
The photoelectric effect will cause spacecraft exposed to sunlight to develop a positive charge. This
can be a major problem, as other parts of the spacecraft in shadow develop a negative charge from
nearby plasma, and the imbalance can discharge through delicate electrical components. The static
charge created by the photoelectric effect is self-limiting, though, because a more highly charged
object gives up its electrons less easily.[60][61]

Moon dust
Light from the sun hitting lunar dust causes it to become charged through the photoelectric effect. The
charged dust then repels itself and lifts off the surface of the Moon by electrostatic levitation.[62][63]
This manifests itself almost like an "atmosphere of dust", visible as a thin haze and blurring of distant
features, and visible as a dim glow after the sun has set. This was first photographed by the Surveyor
program probes in the 1960s. It is thought that the smallest particles are repelled up to kilometers
high, and that the particles move in "fountains" as they charge and discharge.

Night vision devices

Photons hitting a thin film of alkali metal or semiconductor material such as gallium arsenide in an
image intensifier tube cause the ejection of photoelectrons due to the photoelectric effect. These are
accelerated by an electrostatic field towards a phosphor-coated screen, which converts the
electrons back into photons. Intensification of the signal is achieved either through acceleration of the
electrons or by increasing the number of electrons through secondary emissions, such as with a microchannel plate. Sometimes a combination of both methods is used. Additional kinetic energy is
required to move an electron out of the conduction band and into the vacuum level. This is known as
the electron affinity of the photocathode and is another barrier to photoemission other than the
forbidden band, explained by the band gap model. Some materials such as gallium arsenide have an
effective electron affinity that is below the level of the conduction band. In these materials, electrons
that move to the conduction band are all of sufficient energy to be emitted from the material and as
such, the film that absorbs photons can be quite thick. These materials are known as negative electron
affinity materials.
Thermionic emission
Thermionic emission is the thermally induced flow of charge carriers from a surface or over a
potential-energy barrier. This occurs because the thermal energy given to the carrier overcomes the
work function of the material. The charge carriers can be electrons or ions, and in older literature are
sometimes referred to as "thermions". After emission, a charge that is equal in magnitude and opposite
in sign to the total charge emitted is initially left behind in the emitting region. But if the emitter is
connected to a battery, the charge left behind is neutralized by charge supplied by the battery as the
emitted charge carriers move away from the emitter, and finally the emitter will be in the same state as
it was before emission.
The classical example of thermionic emission is the emission of electrons from a hot cathode into a
vacuum (also known as thermal electron emission or the Edison effect) in a vacuum tube. The hot
cathode can be a metal filament, a coated metal filament, or a separate structure of metal or carbides
or borides of transition metals. Vacuum emission from metals tends to become significant only for
temperatures over 1,000 K (730 °C; 1,340 °F).
The term "thermionic emission" is now also used to refer to any thermally-excited charge emission
process, even when the charge is emitted from one solid-state region into another. This process is
crucially important in the operation of a variety of electronic devices and can be used for electricity
generation (such as thermionic converters and electrodynamic tethers) or cooling. The magnitude of
the charge flow increases dramatically with increasing temperature.

Richardson's law
Following J. J. Thomson's identification of the electron in 1897, the British physicist Owen Willans
Richardson began work on the topic that he later called "thermionic emission". He received a Nobel
Prize in Physics in 1928 "for his work on the thermionic phenomenon and especially for the discovery
of the law named after him".
From band theory, there are one or two electrons per atom in a solid that are free to move from atom
to atom. This is sometimes collectively referred to as a "sea of electrons". Their velocities follow a
statistical distribution, rather than being uniform, and occasionally an electron will have enough
velocity to exit the metal without being pulled back in. The minimum amount of energy needed for an
electron to leave a surface is called the work function. The work function is characteristic of the
material and for most metals is on the order of several electronvolts. Thermionic currents can be
increased by decreasing the work function. This often-desired goal can be achieved by applying
various oxide coatings to the wire.
In 1901 Richardson published the results of his experiments: the current from a heated wire seemed to
depend exponentially on the temperature of the wire, with a mathematical form similar to the
Arrhenius equation.[12] Later, he proposed that the emission law should have the mathematical form
J = A T^2 exp(−W/kT), where J is the emission current density, T the absolute temperature of the
emitter, W its work function, k the Boltzmann constant and A a material-dependent constant.
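Evaluated numerically, Richardson's law makes the steep temperature dependence concrete. The sketch below assumes the universal value of the Richardson constant and a work function of 4.5 eV (roughly that of tungsten); it is an illustration, not a model of any particular emitter.

```python
import math

# Richardson's law: J = A * T**2 * exp(-W / (k*T)), in SI units.
A_RICHARDSON = 1.2017e6   # universal Richardson constant, A m^-2 K^-2
K_BOLTZMANN = 1.381e-23   # Boltzmann constant, J/K
EV = 1.602e-19            # joules per electronvolt

def richardson_current_density(T_kelvin, work_function_eV, A=A_RICHARDSON):
    """Thermionic emission current density in A/m^2."""
    W = work_function_eV * EV
    return A * T_kelvin**2 * math.exp(-W / (K_BOLTZMANN * T_kelvin))

# Assumed tungsten-like work function of 4.5 eV:
j_warm = richardson_current_density(1500.0, 4.5)
j_hot = richardson_current_density(2500.0, 4.5)
print(j_warm, j_hot)  # the hotter filament emits millions of times more current
```

The T^2 prefactor matters far less than the Arrhenius-like exponential, which is why vacuum emission from metals only becomes significant above roughly 1,000 K.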

Schottky emission

In electron emission devices, especially electron guns, the thermionic electron emitter
will be biased negative relative to its surroundings. This creates an electric field of
magnitude F at the emitter surface. Without the field, the surface barrier seen by an
escaping Fermi-level electron has height W equal to the local work function. The electric
field lowers the surface barrier by an amount ΔW, and increases the emission current.
This is known as the Schottky effect (named for Walter H. Schottky) or field-enhanced
thermionic emission. It can be modeled by a simple modification of the Richardson
equation, by replacing W by (W − ΔW).
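The standard expression for the barrier lowering is ΔW = sqrt(e³F / 4πε₀). The short sketch below evaluates it for an assumed surface field; both the 1e8 V/m field and the 4.5 eV barrier are illustrative values, not figures from the text.

```python
import math

# Schottky barrier lowering: delta_W = sqrt(e**3 * F / (4*pi*eps0)).
E_CHARGE = 1.602e-19   # elementary charge, C
EPS0 = 8.854e-12       # vacuum permittivity, F/m

def schottky_lowering_eV(field_V_per_m):
    """Barrier lowering delta_W in eV for a surface field F in V/m."""
    dW_joule = math.sqrt(E_CHARGE**3 * field_V_per_m / (4 * math.pi * EPS0))
    return dW_joule / E_CHARGE

# An assumed field of 1e8 V/m lowers a ~4.5 eV barrier by a few tenths of an eV,
# so the effective barrier in the Richardson equation becomes (W - delta_W).
dW = schottky_lowering_eV(1e8)
print(round(dW, 3), round(4.5 - dW, 3))
```

Because ΔW grows only as the square root of F, very large fields are needed before the effect dominates; at still higher fields, field emission (tunneling) takes over entirely.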

Photon-enhanced thermionic emission


Photon-enhanced thermionic emission (PETE) is a process developed by scientists at
Stanford University that harnesses both the light and heat of the sun to generate
electricity, increasing the efficiency of solar power production to more than twice
current levels. The device developed for the process reaches peak efficiency above
200 °C, while most silicon solar cells become inert after reaching 100 °C. Such devices
work best in parabolic dish collectors, which reach temperatures up to 800 °C. Although
the team used a gallium nitride semiconductor in its proof-of-concept device, it claims
that the use of gallium arsenide can increase the device's efficiency to 55–60 percent,
nearly triple that of existing systems,[19][20] and 12–17 percent more than existing 43
percent multi-junction solar cells.

Thermionic emission from graphene


Compared with traditional bulk metal or semiconductor materials, graphene has
many unique and excellent properties, such as atomic-layer thickness, a linear band
structure near the Dirac cone, and an ultrahigh Fermi velocity, making the traditional
Richardson law derived for bulk materials invalid for graphene. Researchers from the Singapore
University of Technology and Design have proposed a new scaling law for
thermionic emission from single-layer graphene, which has been verified
experimentally.[22] They also proposed that a graphene-based vacuum thermionic energy
converter could reach an efficiency of about 45% at cathode temperatures of 700 K
to 900 K, far below the 1,200 K or more required by traditional thermionic converters,
making it promising for recycling waste heat from industrial
processes.

Magnetohydrodynamics (MHD)
Magnetohydrodynamics (MHD; also magneto fluid dynamics or hydromagnetics) is the study of the
magnetic properties of electrically conducting fluids. Examples of such magneto-fluids include
plasmas, liquid metals, and salt water or electrolytes. The word magnetohydrodynamics (MHD) is
derived from magneto- meaning magnetic field, hydro- meaning water, and -dynamics meaning
movement. The field of MHD was initiated by Hannes Alfvén,[1] for which he received the Nobel
Prize in Physics in 1970.
The fundamental concept behind MHD is that magnetic fields can induce currents in a moving
conductive fluid, which in turn polarizes the fluid and reciprocally changes the magnetic field itself.
The set of equations that describe MHD are a combination of the Navier-Stokes equations of fluid
dynamics and Maxwell's equations of electromagnetism. These differential equations must be solved
simultaneously, either analytically or numerically.

Ideal and resistive MHD


The simplest form of MHD, Ideal MHD, assumes that the fluid has so little resistivity that it can be
treated as a perfect conductor. This is the limit of infinite magnetic Reynolds number. In ideal MHD,
Lenz's law dictates that the fluid is in a sense tied to the magnetic field lines. To explain, in ideal
MHD a small rope-like volume of fluid surrounding a field line will continue to lie along a magnetic
field line, even as it is twisted and distorted by fluid flows in the system. This is sometimes referred to
as the magnetic field lines being "frozen" in the fluid. [5] The connection between magnetic field lines
and fluid in ideal MHD fixes the topology of the magnetic field in the fluid. For example, if a set of
magnetic field lines are tied into a knot, then they will remain so as long as the fluid/plasma has
negligible resistivity. This difficulty in reconnecting magnetic field lines makes it possible to store
energy by moving the fluid or the source of the magnetic field. The energy can then become available
if the conditions for ideal MHD break down, allowing magnetic reconnection that releases the stored
energy from the magnetic field.
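The dividing line between ideal and resistive MHD can be made quantitative with the magnetic Reynolds number, Rm = U L μ₀ σ, where U is the flow speed, L the length scale, and σ the conductivity; ideal MHD is the limit Rm → ∞ mentioned above. The numbers below are illustrative assumptions, comparing a small laboratory liquid-metal flow with a large astrophysical plasma.

```python
import math

# Magnetic Reynolds number: Rm = U * L * mu0 * sigma (equivalently U*L/eta,
# where eta = 1/(mu0*sigma) is the magnetic diffusivity). Large Rm means the
# field lines are effectively "frozen" into the fluid (ideal MHD).
MU0 = 4 * math.pi * 1e-7  # vacuum permeability, H/m

def magnetic_reynolds(speed, length, conductivity):
    """Rm for flow speed U (m/s), length scale L (m), conductivity sigma (S/m)."""
    return speed * length * MU0 * conductivity

# Assumed values: a lab liquid-metal flow vs. a large, highly conducting plasma.
rm_lab = magnetic_reynolds(1.0, 0.1, 1e7)     # order 1: resistivity matters
rm_astro = magnetic_reynolds(1e3, 1e6, 1e6)   # enormous: ideal MHD applies
print(rm_lab, rm_astro)
```

The contrast explains why "frozen-in" field lines and stored magnetic energy are good descriptions of the solar corona or the magnetosphere, but not of small conducting flows in the laboratory.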

Applicability of ideal MHD to plasmas


Ideal MHD is only strictly applicable when:
1. The plasma is strongly collisional, so that the time scale of collisions is shorter
than the other characteristic times in the system, and the particle distributions
are therefore close to Maxwellian.
2. The resistivity due to these collisions is small. In particular, the typical magnetic
diffusion times over any scale length present in the system must be longer than
any time scale of interest.
3. The length scales of interest are much longer than the ion skin depth and the Larmor
radius perpendicular to the field, and long enough along the field to ignore Landau
damping; the time scales of interest are much longer than the ion gyration time (the
system is smooth and slowly evolving).

Structures in MHD systems


In many MHD systems most of the electric current is compressed into thin nearly-two-dimensional
ribbons termed current sheets. These can divide the fluid into magnetic domains, inside of which the
currents are relatively weak. Current sheets in the solar corona are thought to be between a few meters
and a few kilometers in thickness, which is quite thin compared to the magnetic domains (which are
thousands to hundreds of thousands of kilometers across). Another example is in the Earth's
magnetosphere, where current sheets separate topologically distinct domains, isolating most of the
Earth's ionosphere from the solar wind.

Solar cell/Photovoltaics
A solar cell, or photovoltaic cell (previously termed "solar battery"[1]), is an electrical device that
converts the energy of light directly into electricity by the photovoltaic effect, which is a physical and
chemical phenomenon.[2] It is a form of photoelectric cell, defined as a device whose electrical
characteristics, such as current, voltage, or resistance, vary when exposed to light. Solar cells are the
building blocks of photovoltaic modules, otherwise known as solar panels.
Solar cells are described as being photovoltaic irrespective of whether the source is sunlight or
artificial light. They are used as photodetectors (for example, infrared detectors), detecting light or
other electromagnetic radiation near the visible range, or for measuring light intensity.
The operation of a photovoltaic (PV) cell requires three basic attributes:

- The absorption of light, generating either electron-hole pairs or excitons.
- The separation of charge carriers of opposite types.
- The separate extraction of those carriers to an external circuit.

In contrast, a solar thermal collector supplies heat by absorbing sunlight, for the purpose of either
direct heating or indirect electrical power generation from heat. A "photoelectrolytic cell"
(photoelectrochemical cell), on the other hand, refers either to a type of photovoltaic cell (like that
developed by Edmond Becquerel and modern dye-sensitized solar cells), or to a device that splits
water directly into hydrogen and oxygen using only solar illumination.

Applications

Assemblies of solar cells are used to make solar modules which generate electrical power from
sunlight, as distinguished from a "solar thermal module" or "solar hot water panel". A solar array
generates solar power using solar energy.

Cells, modules, panels and systems


Multiple solar cells in an integrated group, all oriented in one plane, constitute a solar photovoltaic
panel or solar photovoltaic module. Photovoltaic modules often have a sheet of glass on the
sun-facing side, allowing light to pass while protecting the semiconductor wafers. Solar cells are
usually connected in series in modules, creating an additive voltage. Connecting
cells in parallel yields a higher current; however, problems such as shadow effects can shut down the
weaker (less illuminated) parallel string (a number of series connected cells) causing substantial
power loss and possible damage because of the reverse bias applied to the shadowed cells by their
illuminated partners. Strings of series cells are usually handled independently and not connected in
parallel, though as of 2014, individual power boxes are often supplied for each module, and are
connected in parallel. Although modules can be interconnected to create an array with the desired
peak DC voltage and loading current capacity, using independent MPPTs (maximum power point
trackers) is preferable. Otherwise, shunt diodes can reduce shadowing power loss in arrays with
series/parallel connected cells.

Efficiency
Solar cell efficiency may be broken down into reflectance efficiency, thermodynamic efficiency,
charge carrier separation efficiency and conductive efficiency. The overall efficiency is the product of
these individual metrics.

A solar cell has a voltage dependent efficiency curve, temperature coefficients, and allowable shadow
angles.
Due to the difficulty in measuring these parameters directly, other parameters are substituted:
thermodynamic efficiency, quantum efficiency, integrated quantum efficiency, VOC ratio, and fill
factor. Reflectance losses are a portion of quantum efficiency under "external quantum efficiency".
Recombination losses make up another portion of quantum efficiency, VOC ratio, and fill factor.
Resistive losses are predominantly categorized under fill factor, but also make up minor portions of
quantum efficiency and VOC ratio.
The fill factor is the ratio of the actual maximum obtainable power to the product of the open circuit
voltage and short circuit current. This is a key parameter in evaluating performance. In 2009, typical
commercial solar cells had a fill factor > 0.70. Grade B cells were usually between 0.4 and 0.7. [32]
Cells with a high fill factor have a low equivalent series resistance and a high equivalent shunt
resistance, so less of the current produced by the cell is dissipated in internal losses.
Single p-n junction crystalline silicon devices are now approaching the theoretical limiting power
efficiency of 33.7%, noted as the Shockley–Queisser limit in 1961. In the extreme, with an infinite
number of layers, the corresponding limit is 86% using concentrated sunlight. [33]
In 2014, three companies broke the record of 25.6% for a silicon solar cell. Panasonic's was the most
efficient. The company moved the front contacts to the rear of the panel, eliminating shaded areas. In
addition they applied thin silicon films to the (high quality silicon) wafer's front and back to eliminate
defects at or near the wafer surface. [34]
In 2015, a 4-junction GaInP/GaAs//GaInAsP/GaInAs solar cell achieved a new laboratory record
efficiency of 46.1 percent (concentration ratio of sunlight = 312) in a French-German collaboration
between the Fraunhofer Institute for Solar Energy Systems (Fraunhofer ISE), CEA-LETI and
SOITEC.[35]
In September 2015, Fraunhofer ISE announced the achievement of an efficiency above 20% for
epitaxial wafer cells. The work on optimizing the atmospheric-pressure chemical vapor deposition
(APCVD) in-line production chain was done in collaboration with NexWafe GmbH, a company spun
off from Fraunhofer ISE to commercialize production.

Ocean thermal energy conversion


Ocean Thermal Energy Conversion (OTEC) is a clean, zero-emission and renewable energy
technology. OTEC takes the heat from tropical oceans and converts it to electricity. OTEC is capable
of generating electricity day and night, throughout the year, providing a reliable source of electricity.
Although still largely untapped, OTEC is one of the world's largest renewable energy resources and is
available to around 100 countries within their exclusive economic zones.

Potential
The ocean constitutes an enormous energy source. Covering almost two-thirds of the surface of the
earth, the ocean captures 70% of the solar energy that irradiates the earth. It is estimated that the solar
energy absorbed by the ocean per year exceeds human energy consumption more than 4,000
times.[2] Recent research concludes that between 7 and 30 terawatts of electric potential are
available without an adverse impact on natural thermal currents and ocean temperatures,[3] which
equals three to ten times the global electricity demand. This vast resource has been recognized
worldwide in recent reports from the International Institute for Applied Systems Analysis (IIASA)[4]
and the Intergovernmental Panel on Climate Change.[5]
An advantage of OTEC compared to other renewable energy sources is the reliable and predictable
energy production. Since tropical oceans show hardly any fluctuation in surface temperature
(daily or seasonal), the temperature difference between the various oceanic layers
remains nearly constant. This enables OTEC to provide a base-load electricity supply with a capacity
factor of 80%–100%.[1]

Oceanic layered structure


The oceans cover almost two-thirds of the surface of the earth and capture a majority of the solar
energy that reaches the earth.[4] Especially in tropical regions, solar energy is absorbed by the ocean
and stored as heat. The balance between the incoming solar radiation and the heat loss due to
evaporation and convection accounts for a constant temperature of the oceanic surface water. [6]
As depth increases, the ocean water becomes colder, due to the accumulation of ice-cold water that
has melted from the polar regions.[7] Because of its higher density, this cold water flows along the
bottom of the ocean from the poles towards the equator, displacing the lower-density water above.
These two phenomena provide for a layered oceanic structure in deep, tropical oceans with a reservoir
of warm water at the surface and a reservoir of cold water deeper in the ocean. [8] The temperature
difference between these layers varies between 22 C and 25 C.[6] Until today this vast sustainable
energy reservoir remains largely unused.

Working principle
The OTEC system is based on an organic Rankine cycle; a working fluid with a lower boiling point
and a higher vapour pressure than water is used to power a turbine that generates electricity. First,
warm water from the ocean surface is pumped through a heat exchanger. In the heat exchanger, the
heat that is exchanged from the seawater to the working fluid causes the working fluid to vaporize.
This vaporized working fluid expands through a turbine that is connected to a generator, producing
electricity. Thereafter, cold seawater, pumped through a second heat exchanger, condenses the vapour
into a liquid so it can be reused, completing the electricity-generating cycle.
Working fluids

Effective electricity generation with OTEC requires a working fluid with a lower boiling point and a
higher vapour pressure than water. A typical choice of working fluid is ammonia, which has superior
transport properties and is readily available at low cost. The extensive operational experience with
ammonia in refrigeration systems and its proven safety record also make it preferable to other
candidate working fluids, such as propane and other refrigerants. The working fluid is contained in
a closed system at relatively low operating pressures and temperatures, much lower than in, for
instance, fossil-fuel or nuclear power plants. Nonetheless, the components that contain the working
fluid must be well sealed, but reliable solutions are readily available.
Efficiency
In line with the Carnot efficiency, a heat engine gives greater efficiency when run with a large
temperature difference. The temperature difference between the surface and deep water of the ocean is
greatest in the tropics, although still a modest 20 to 25 °C.[9] It is therefore in the tropics that OTEC
offers the greatest possibilities. The energy consumption of an OTEC cycle is dominated by the
seawater pumps. These pumps and other auxiliary equipment consume roughly 20% of the total
electricity produced. The remaining 80% is net power and can then be supplied to the grid. [6]
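The Carnot bound quoted implicitly here can be worked out directly: η = 1 − T_cold/T_hot with temperatures in kelvin. The surface and deep-water temperatures below are assumed illustrative values consistent with the 20–25 °C difference cited above.

```python
# Ideal (Carnot) efficiency for an OTEC temperature difference:
#   eta = 1 - T_cold / T_hot, with temperatures in kelvin.
def carnot_efficiency(t_hot_c, t_cold_c):
    """Carnot efficiency for reservoir temperatures given in degrees Celsius."""
    return 1.0 - (t_cold_c + 273.15) / (t_hot_c + 273.15)

# Assumed tropical surface water at 28 °C and deep water at 5 °C:
eta = carnot_efficiency(28.0, 5.0)
print(round(eta * 100, 1))  # ideal efficiency in percent, only a few percent
```

With an ideal efficiency of only a few percent, the ~20% of gross output consumed by the seawater pumps is a large fraction of the margin, which is why pump design dominates the energy balance of an OTEC cycle.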

Cost and economics


Because OTEC systems have not yet been widely deployed, cost estimates are uncertain. The current
levelized cost of energy (LCOE) of OTEC is estimated at about $0.20–$0.25 per kWh, with
significant room for cost reductions to below $0.07 per kWh once the systems become more
mature.[5][10] The LCOE of OTEC is predominantly determined by the initial capital investment.[5] The
pipes that transport the deep seawater and the heat exchangers form one of the largest capital
investments of OTEC systems.[11] Annual operation and maintenance costs are about 1% of the capital
investment.

Environmental impact
The energy produced by OTEC is clean, zero-emission and renewable.[6] It will drastically reduce
emissions and make energy available from an inexhaustible natural resource. A life cycle analysis of
an OTEC plant, assuming currently available technology, found a global warming potential of at
most 3% of that of diesel-generated electricity, with an energy payback within 1 to 2 years.[12]
It is anticipated that these figures will improve further as the technology matures.
OTEC requires seawater flow rates of several cubic meters per second per net megawatt of electricity
produced. Though substantial, these flow rates are negligible compared to normal ocean currents with
flow rates of many million cubic meters per second. By selecting the right location for the seawater
intakes and the size of mesh for the intake filters, the possible entrainment of organisms is minimized.
[13]
Generally speaking, the problem can often be reduced by placing the seawater intake further from
the shore while avoiding submarine canyons, coral reefs or areas with fast ocean currents.
The seawater coming out of the OTEC plant is returned to a level in the ocean with approximately the
same temperature and below the photic zone.[14] The latter ensures that the discharge plume with
nutrient-rich deep seawater doesn't trigger biological growth.[15] The exact siting of the discharge pipe
will vary according to currents and temperatures at the specific location. It is typically between
several tens and two hundred meters deep.
Most recently, NOAA held OTEC workshops in 2010 and 2012, seeking to assess the physical,
chemical, and biological impacts and risks of OTEC, and to identify information gaps or needs.[16][17]
Today's available environmental modelling tools, sensors and monitoring techniques greatly help in
analysing and monitoring impact at specific locations. The Tethys database provides access to
scientific literature and general information on the potential environmental effects of OTEC.

Geothermal springs
Geothermal power is power generated by geothermal energy. Technologies in use include dry steam
power stations, flash steam power stations and binary cycle power stations. Geothermal electricity
generation is currently used in 24 countries,[1] while geothermal heating is in use in 70 countries.[2]
As of 2015, worldwide geothermal power capacity amounts to 12.8 gigawatts (GW), of which 28
percent or 3,548 megawatts are installed in the United States. International markets grew at an
average annual rate of 5 percent over the last three years and global geothermal power capacity is
expected to reach 14.5–17.6 GW by 2020.[3] Based on current geologic knowledge and technology,
the Geothermal Energy Association (GEA) estimates that only 6.5 percent of total global potential has
been tapped so far, while the IPCC reported geothermal power potential to be in the range of 35 GW
to 2 TW.[2] Countries generating more than 15 percent of their electricity from geothermal sources
include El Salvador, Kenya, the Philippines, Iceland and Costa Rica.
Geothermal power is considered to be a sustainable, renewable source of energy because the heat
extraction is small compared with the Earth's heat content.[4] The greenhouse gas emissions of
geothermal electric stations are on average 45 grams of carbon dioxide per kilowatt-hour of
electricity, or less than 5 percent of that of conventional coal-fired plants.

Power station types


Dry steam power stations
Dry steam stations are the simplest and oldest design. They directly use geothermal steam of 150 °C
or greater to turn turbines.[2]
Flash steam power stations
Flash steam stations pull deep, high-pressure hot water into lower-pressure tanks and use the resulting
flashed steam to drive turbines. They require fluid temperatures of at least 180 °C, usually more. This
is the most common type of station in operation today. Flash steam plants use geothermal reservoirs
of water with temperatures greater than 360 °F (182 °C). The hot water flows up through wells in the
ground under its own pressure. As it flows upward, the pressure decreases and some of the hot water
boils into steam. The steam is then separated from the water and used to power a turbine/generator.
Any leftover water and condensed steam may be injected back into the reservoir, making this a
potentially sustainable resource.[23] [24] At The Geysers in California, twenty years of power
production had depleted the groundwater and operations were substantially reduced. To restore some
of the former capacity, water injection was developed.[25]
Binary cycle power stations
Binary cycle power stations are the most recent development, and can accept fluid temperatures as
low as 57 °C.[11] The moderately hot geothermal water is passed by a secondary fluid with a much
lower boiling point than water. This causes the secondary fluid to flash-vaporize, which then drives
the turbines. This is the most common type of geothermal electricity station being constructed today.
[26] Both organic Rankine and Kalina cycles are used. The thermal efficiency of this type of station is
typically about 10–13%.

Worldwide production
The International Geothermal Association (IGA) has reported that 10,715 megawatts (MW) of
geothermal power in 24 countries is online, which is expected to generate 67,246 GWh of electricity
in 2010.[1] This represents a 20% increase in geothermal power online capacity since 2005. IGA
projected this would grow to 18,500 MW by 2015, due to the large number of projects that were under
consideration, often in areas previously assumed to have little exploitable resource.[1]
In 2010, the United States led the world in geothermal electricity production with 3,086 MW of
installed capacity from 77 power stations;[27] the largest group of geothermal power plants in the
world is located at The Geysers, a geothermal field in California.[28] The Philippines follows the US
as the second highest producer of geothermal power in the world, with 1,904 MW of capacity online;
geothermal power makes up approximately 27% of the country's electricity generation.[27]
Al Gore said at The Climate Project Asia Pacific Summit that Indonesia could become a superpower
in electricity production from geothermal energy.[29] India has announced a plan to develop
the country's first geothermal power facility in Chhattisgarh.[30]
Canada is the only major country on the Pacific Ring of Fire which has not yet developed geothermal
power. The region of greatest potential is the Canadian Cordillera, stretching from British Columbia
to the Yukon, where estimates of generating output have ranged from 1,550 MW to 5,000 MW.[31]
Utility-grade stations
The largest group of geothermal power plants in the world is located at The Geysers, a geothermal
field in California, United States.[32] As of 2004, five countries (El Salvador, Kenya, the Philippines,
Iceland, and Costa Rica) generate more than 15% of their electricity from geothermal sources.[2]
Geothermal electricity is generated in the 24 countries listed in the table below. During 2005,
contracts were placed for an additional 500 MW of electrical capacity in the United States, while there
were also stations under construction in 11 other countries.[12] Enhanced geothermal systems that are
several kilometres in depth are operational in France and Germany and are being developed or
evaluated in at least four other countries.
Environmental impact
Fluids drawn from the deep earth carry a mixture of gases, notably carbon dioxide (CO2), hydrogen
sulfide (H2S), methane (CH4), ammonia (NH3) and radon (Rn). These pollutants contribute to global
warming, acid rain, radiation and noxious smells if released.
Existing geothermal electric stations that fall within the 50th percentile of all total life cycle
emissions studies reviewed by the IPCC produce on average 45 kg of CO2-equivalent emissions per
megawatt-hour of generated electricity (kg CO2eq/MWh). For comparison, a coal-fired power plant
emits 1,001 kg of CO2 per megawatt-hour when not coupled with carbon capture and storage (CCS).[5]
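As a quick sanity check on the comparison, the two per-MWh figures cited can be put side by side; this reproduces the "less than 5 percent" claim made earlier for geothermal versus uncaptured coal.

```python
# Per-MWh life-cycle emission figures cited in the text:
GEOTHERMAL_KG_PER_MWH = 45.0    # median geothermal station, kg CO2eq/MWh
COAL_KG_PER_MWH = 1001.0        # coal plant without CCS, kg CO2/MWh

share = GEOTHERMAL_KG_PER_MWH / COAL_KG_PER_MWH
print(round(share * 100, 1))  # geothermal emissions as a percent of coal's
```

Note that 45 kg per MWh is the same figure as the 45 grams per kWh quoted earlier in this section, just expressed in different units.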
Stations that experience high levels of acids and volatile chemicals are usually equipped with
emission-control systems to reduce the exhaust. Geothermal stations could theoretically inject these
gases back into the earth, as a form of carbon capture and storage.

In addition to dissolved gases, hot water from geothermal sources may hold in solution trace amounts
of toxic chemicals, such as mercury, arsenic, boron, antimony, and salt.[40] These chemicals come
out of solution as the water cools, and can cause environmental damage if released. The modern
practice of injecting geothermal fluids back into the Earth to stimulate production has the side benefit
of reducing this environmental risk.
Fusion energy
In nuclear physics, nuclear fusion is a nuclear reaction in which two or more atomic nuclei come close
enough to form one or more different atomic nuclei and subatomic particles (neutrons and/or protons).
The difference in mass between the products and reactants is manifested as the release of large
amounts of energy. This difference in mass arises due to the difference in atomic "binding energy"
between the atomic nuclei before and after the reaction. Fusion is the process that powers active or
"main sequence" stars, or other high magnitude stars.
A fusion process that produces nuclei lighter than iron-56 or nickel-62 will generally yield a net energy release. These elements have the smallest mass per nucleon and the largest binding energy per nucleon, respectively. Fusion of light elements toward these elements releases energy (an exothermic process), while fusion producing nuclei heavier than them leaves energy retained by the resulting nucleons, making the reaction endothermic. The opposite is true for the reverse process, nuclear fission. This means that the lighter elements, such as hydrogen and helium, are in general more fusible, while the heavier elements, such as uranium and plutonium, are more fissionable. The extreme astrophysical event of a supernova can produce enough energy to fuse nuclei into elements heavier than iron.
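A minimal sketch of this binding-energy curve, using approximate, widely tabulated values (the specific nuclides and numbers below are standard reference figures, not taken from this text):

```python
# Sketch: binding energy per nucleon for a few nuclides (approximate,
# well-known values in MeV). The curve peaks near Fe-56/Ni-62, which is
# why fusing light nuclei and fissioning heavy nuclei both release energy.

BINDING_ENERGY_PER_NUCLEON_MEV = {
    "H-2":   1.11,
    "He-4":  7.07,
    "Fe-56": 8.79,
    "Ni-62": 8.79,
    "U-235": 7.59,
}

# Moving toward the peak of the curve releases energy in both directions:
assert BINDING_ENERGY_PER_NUCLEON_MEV["He-4"] > BINDING_ENERGY_PER_NUCLEON_MEV["H-2"]    # fusion
assert BINDING_ENERGY_PER_NUCLEON_MEV["Fe-56"] > BINDING_ENERGY_PER_NUCLEON_MEV["U-235"]  # fission
print("binding-energy curve peaks near iron/nickel")
```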
Following the discovery of quantum tunneling by physicist Friedrich Hund, Robert Atkinson and Fritz Houtermans in 1929 used the measured masses of light elements to predict that large amounts of
energy could be released by fusing small nuclei. Building upon the nuclear transmutation experiments
by Ernest Rutherford, carried out several years earlier, the laboratory fusion of hydrogen isotopes was
first accomplished by Mark Oliphant in 1932. During the remainder of that decade the steps of the
main cycle of nuclear fusion in stars were worked out by Hans Bethe. Research into fusion for
military purposes began in the early 1940s as part of the Manhattan Project. Fusion was accomplished
in 1951 with the Greenhouse Item nuclear test. Nuclear fusion on a large scale in an explosion was
first carried out on November 1, 1952, in the Ivy Mike hydrogen bomb test.
Research into developing controlled thermonuclear fusion for civil purposes also began in earnest in
the 1950s, and it continues to this day.
The Hydrogen Fusion Process
In the basic hydrogen fusion cycle, four hydrogen nuclei (protons) come together to make a helium nucleus. This is the simplified version of the story: electrons, neutrinos and photons are also involved in making the fusion of hydrogen into helium possible. The important thing to remember is that this fusion cycle releases energy in the core of the star. It is this fusion cycle that generates energy in our Sun, the energy we feel as heat on summer days!

This whole process happens in three steps.
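The net energy released by the overall cycle can be estimated from the mass defect via E = Δm·c²; a minimal sketch using standard atomic masses (the constants below are standard reference values, not from this text):

```python
# Sketch: energy released when four hydrogen nuclei fuse into one helium
# nucleus, estimated from the atomic mass defect (E = Δm·c²).
# Atomic masses in unified atomic mass units (u); 1 u ≈ 931.494 MeV/c².

H1_MASS_U = 1.007825    # hydrogen-1 atomic mass
HE4_MASS_U = 4.002602   # helium-4 atomic mass
MEV_PER_U = 931.494     # energy equivalent of 1 u

mass_defect_u = 4 * H1_MASS_U - HE4_MASS_U
energy_mev = mass_defect_u * MEV_PER_U

print(f"mass defect: {mass_defect_u:.6f} u")
print(f"energy released per helium nucleus: {energy_mev:.2f} MeV")
# ≈ 26.7 MeV; about 0.7% of the original mass is converted to energy
```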
Nuclear fusion in stars
The most important fusion process in nature is the one that powers stars, stellar nucleosynthesis. In
the 20th century, it was realized that the energy released from nuclear fusion reactions accounted for
the longevity of the Sun and other stars as a source of heat and light. The fusion of nuclei in a star,
starting from its initial hydrogen and helium abundance, provides that energy and synthesizes new
nuclei as a byproduct of the fusion process. The prime energy producer in the Sun is the fusion of
hydrogen to form helium, which occurs at a solar-core temperature of 14 million kelvin. The net result
is the fusion of four protons into one alpha particle, with the release of two positrons, two neutrinos
(which changes two of the protons into neutrons), and energy. Different reaction chains are involved,
depending on the mass of the star. For stars the size of the sun or smaller, the proton-proton chain
dominates. In heavier stars, the CNO cycle is more important.
As a star uses up a substantial fraction of its hydrogen, it begins to synthesize heavier elements.
However, the heaviest elements are synthesized by fusion that occurs as a more massive star
undergoes a violent supernova at the end of its life, a process known as supernova nucleosynthesis.
Requirements
A substantial energy barrier of electrostatic forces must be overcome before fusion can occur. At large
distances, two naked nuclei repel one another because of the repulsive electrostatic force between
their positively charged protons. If two nuclei can be brought close enough together, however, the
electrostatic repulsion can be overcome by a quantum effect in which nuclei can tunnel through the Coulomb barrier.
When a nucleon such as a proton or neutron is added to a nucleus, the nuclear force attracts it to all
the other nucleons of the nucleus (if the atom is small enough), but primarily to its immediate
neighbours due to the short range of the force. The nucleons in the interior of a nucleus have more
neighboring nucleons than those on the surface. Since smaller nuclei have a larger surface area-to-volume ratio, the binding energy per nucleon due to the nuclear force generally increases with the size
of the nucleus but approaches a limiting value corresponding to that of a nucleus with a diameter of
about four nucleons. It is important to keep in mind that nucleons are quantum objects. So, for
example, since two neutrons in a nucleus are identical to each other, the goal of distinguishing one
from the other, such as which one is in the interior and which is on the surface, is in fact meaningless,
and the inclusion of quantum mechanics is therefore necessary for proper calculations.
The electrostatic force, on the other hand, is an inverse-square force, so a proton added to a nucleus
will feel an electrostatic repulsion from all the other protons in the nucleus. The electrostatic energy
per nucleon due to the electrostatic force thus increases without limit as the atomic number of the nucleus grows.
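The height of this electrostatic barrier can be estimated with Coulomb's law; a rough sketch, where the ~1 fm separation is an assumed round number chosen for illustration:

```python
# Sketch: order-of-magnitude estimate of the Coulomb barrier between two
# nuclei, using E = (e²/4πε₀)·Z1·Z2 / r with e²/(4πε₀) ≈ 1.44 MeV·fm.

E2_MEV_FM = 1.44  # e²/(4πε₀) expressed in MeV·fm

def coulomb_barrier_mev(z1: int, z2: int, separation_fm: float) -> float:
    """Electrostatic potential energy of two nuclei at a given separation."""
    return E2_MEV_FM * z1 * z2 / separation_fm

# Two protons brought to ~1 fm, roughly where the attractive nuclear
# force takes over (separation is an illustrative assumption):
print(f"{coulomb_barrier_mev(1, 1, 1.0):.2f} MeV")
```

The result, on the order of 1 MeV, dwarfs typical thermal energies, which is why quantum tunneling is essential for fusion at achievable temperatures.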
Methods for achieving fusion

Thermonuclear fusion
If matter is sufficiently heated (thereby becoming a plasma), fusion reactions may occur due to collisions between particles with extreme thermal kinetic energies. In the form of thermonuclear weapons, thermonuclear fusion is the only fusion technique so far to yield undeniably large amounts of useful fusion energy; usable amounts of thermonuclear fusion energy released in a controlled manner have yet to be achieved. In nature, this is what produces energy in stars through stellar nucleosynthesis.
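The temperatures involved can be related to particle kinetic energies through the Boltzmann constant; a rough sketch (the 10 keV figure is a typical illustrative value for fusion plasmas, not from the source):

```python
# Sketch: relating the particle kinetic energies needed for fusion to a
# plasma temperature via E ≈ k_B·T (a rough equivalence that ignores the
# Maxwell-Boltzmann tail, which lets some fusion occur below the barrier).

K_B_EV_PER_K = 8.617e-5  # Boltzmann constant in eV/K

def kelvin_for_energy(energy_ev: float) -> float:
    """Temperature whose characteristic thermal energy k_B*T equals energy_ev."""
    return energy_ev / K_B_EV_PER_K

# Fusion plasmas are typically run at ion energies of order 10 keV
# (assumed illustrative figure):
print(f"{kelvin_for_energy(10_000):.2e} K")
# roughly 10^8 K, hotter than the Sun's core
```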

Inertial confinement fusion


Inertial confinement fusion (ICF) is a type of fusion energy research that attempts to initiate nuclear
fusion reactions by heating and compressing a fuel target, typically in the form of a pellet that most
often contains a mixture of deuterium and tritium.
Inertial electrostatic confinement
Inertial electrostatic confinement is a set of devices that use an electric field to heat ions to fusion conditions. The best known is the fusor. Starting in 1999, a number of amateurs have achieved fusion using these homemade devices.[11][12][13][14][15] Other IEC devices include the Polywell, MIX POPS[16] and Marble concepts.
Beam-beam or beam-target fusion
If the energy to initiate the reaction comes from accelerating one of the nuclei, the process is called
beam-target fusion; if both nuclei are accelerated, it is beam-beam fusion.
Accelerator-based light-ion fusion is a technique using particle accelerators to achieve particle kinetic energies sufficient to induce light-ion fusion reactions. Accelerating light ions is relatively easy and can be done efficiently, requiring only a vacuum tube, a pair of electrodes, and a high-voltage transformer; fusion can be observed with as little as 10 kV between the electrodes. The key
problem with accelerator-based fusion (and with cold targets in general) is that fusion cross sections
are many orders of magnitude lower than Coulomb interaction cross sections. Therefore, the vast
majority of ions expend their energy emitting bremsstrahlung radiation and the ionization of atoms of
the target. Devices referred to as sealed-tube neutron generators are particularly relevant to this
discussion. These small devices are miniature particle accelerators filled with deuterium and tritium
gas in an arrangement that allows ions of those nuclei to be accelerated against hydride targets, also
containing deuterium and tritium, where fusion takes place, releasing a flux of neutrons. Hundreds of
neutron generators are produced annually for use in the petroleum industry where they are used in
measurement equipment for locating and mapping oil reserves.
Muon-catalyzed fusion
Muon-catalyzed fusion is a well-established and reproducible fusion process that occurs at ordinary
temperatures. It was studied in detail by Steven Jones in the early 1980s. Net energy production from this reaction cannot occur because of the high energy required to create muons, their short 2.2 µs half-life, and the high chance that a muon will bind to the new alpha particle and thus stop catalyzing fusion.
