Science Technology Handout-2 by Dr. Rahul Shankar Lec-13



Q.5: Introduce the concept of Artificial Intelligence (AI). How does AI help clinical
diagnosis? Do you perceive any threat to the privacy of the individual in the use of AI
in healthcare? (Answer in 150 words)

CONCEPT OF ARTIFICIAL INTELLIGENCE


Artificial intelligence (AI) is a branch of computer science. It involves developing
computer programs to complete tasks which would otherwise require human
intelligence. AI algorithms can tackle learning, perception, problem-solving,
language-understanding and/or logical reasoning.
Machine learning is an area of artificial intelligence (AI) based on the concept that a
computer program can learn and adapt to new data without human intervention.
AI in clinical diagnosis is a rapidly advancing field of healthcare where artificial
intelligence (AI) technologies are used to assist healthcare professionals in the process of
diagnosing diseases and medical conditions. It has the potential to significantly enhance
the accuracy and efficiency of diagnosis, improve patient outcomes, and reduce
healthcare costs.
Key aspects of AI in clinical diagnosis:
1. Medical Imaging: AI algorithms are widely used in the analysis of medical images such
as X-rays, CT scans, MRIs, and pathology slides. These algorithms can help detect
abnormalities, tumors, fractures, and other medical conditions with high accuracy. For
example, AI-powered image recognition can assist radiologists in identifying anomalies
in medical images more quickly.
S&T MAINS 2021
2. Early Disease Detection: AI can be used to identify early signs of diseases, allowing for
timely intervention and treatment. For instance, machine learning models can analyze
patient data to predict the likelihood of developing conditions like diabetes, heart disease,
or certain types of cancer.

VAJIRAM & RAVI 1


Key privacy concerns associated with AI in healthcare:

1. Data Security: Healthcare AI relies on vast amounts of sensitive patient data,
including medical records, images, and genetic information. These data must be
stored and transmitted securely to prevent unauthorized access or data
breaches.
2. Data Misuse: There is a risk that AI systems or healthcare providers could misuse
patient data for purposes other than diagnosis and treatment, such as marketing,
insurance, or research without proper consent.
3. De-Identification Challenges: Even when data is anonymized, AI algorithms have
demonstrated the ability to re-identify individuals, potentially compromising
patient privacy.
4. Informed Consent: Patients may not fully understand how their data will be used
in AI applications, raising questions about informed consent. Clear and
transparent consent processes are essential.
5. Algorithm Bias: Biased algorithms can perpetuate existing healthcare disparities,
potentially discriminating against certain demographic groups. Efforts to ensure
fairness and equity in AI algorithms are crucial.



Nipah virus:
A potential pandemic agent in the context of the current severe acute respiratory
syndrome coronavirus 2 pandemic

Nipah virus (NiV) is a zoonotic virus, meaning that it can spread between animals and
people. Fruit bats, also called flying foxes, are the animal reservoir for NiV in nature.
Nipah virus is also known to cause illness in pigs and people. Infection with NiV is
associated with encephalitis (swelling of the brain) and can cause mild to severe illness
and even death. Outbreaks occur almost annually in parts of Asia, primarily Bangladesh
and India.

Factors responsible for zoonotic outbreaks


1. intensive livestock farming and agriculture
2. using wildlife as food sources
3. clearing land for farming and grazing
4. human encroachment on wildlife habitats
5. international trading of exotic animals and urbanization
➢ All of these lead to more human-animal and environmental interactions.
➢ Spillover of pathogens from an animal reservoir to humans remains a major factor
in zoonotic outbreaks. Anthropogenic activities are making zoonotic spillover
more likely by favouring contact with the animal species that host these viruses.
➢ Genetic factors are also responsible for zoonotic outbreaks, as many of the
responsible viruses are RNA viruses, including influenza, SARS, MERS, Ebola
and the recent SARS-CoV-2. It has been found that RNA viruses are the
primary aetiologic agents of 44% of all emerging infectious diseases in recent
decades, as they have exceptionally short generation times and fast evolutionary
rates, which increase the likelihood of infecting new host species. NiV is also an
RNA virus, and because RNA viruses mutate so frequently, we cannot determine
whether a novel NiV strain will emerge within the next 5 years or later.

Conclusion
➢ For about 22 years, a Henipavirus strain, NiV, has occasionally emerged, with
several outbreaks in different areas killing many people. Scientists have
speculated that NiV is likely to be the next pandemic agent after COVID-19. A focus
on a One Health approach to the animal-human-environment interface is
urgently needed to combat future outbreaks. Approaches should consider
surveillance of animal health and management of animal farms; monitoring human
and environmental health; increasing food safety and food security associated with
zoonotic disease; checking environmental aspects such as deforestation and
promoting sustainable land use for managing biodiversity; and engaging in global-level
participation and collaboration.
➢ Creating and maintaining scientific momentum, as well as investing in studies
dealing with viruses and vaccine development, should be a concern for world
leaders and policy makers. The ongoing coronavirus pandemic has
shattered the world's economy and highlighted the limitations of healthcare
systems.



Laws of Robotics

The three ‘Laws of Robotics’ propounded by Isaac Asimov are as follows:

• Law 1: A robot shall not injure a human or, through inaction, allow a human to come
to harm.

• Law 2: A robot shall obey orders given to it by a human, except where such orders
would conflict with the first law.

• Law 3: A robot must protect its own existence as long as such protection does not
conflict with the first or second law.
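The precedence among the three laws can be sketched as a simple ordered check. The following is an illustrative toy, not part of the handout; the action attributes are hypothetical labels.

```python
# Toy sketch of Asimov's laws as an ordered priority filter: an action is
# permitted only if no higher-priority law rejects it.

def permitted(action: dict) -> bool:
    """Return True if the action passes all three laws, checked in order."""
    # Law 1: never injure a human, or allow harm through inaction.
    if action.get("harms_human") or action.get("inaction_allows_harm"):
        return False
    # Law 2: obey human orders, unless obeying would break Law 1.
    if action.get("disobeys_order") and not action.get("obeying_would_harm_human"):
        return False
    # Law 3: self-preservation, unless it conflicts with Laws 1 or 2.
    if action.get("endangers_robot") and not action.get("needed_for_laws_1_or_2"):
        return False
    return True

print(permitted({"disobeys_order": True}))          # False: violates Law 2
print(permitted({"endangers_robot": True,
                 "needed_for_laws_1_or_2": True}))  # True: Law 3 yields
```

The ordering of the `if` checks encodes the precedence: a lower law never overrides a higher one.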

Collaborative industrial robots

These are developed to perform tasks in collaboration with workers in industrial sectors.
Industrial robots developed to work with humans in a shared workspace are
called ‘cobots’. Such robots are designed with a variety of technical features that ensure
they do not cause any harm to a worker when they come in direct contact, either
deliberately or by accident. Such a cobot is built with lightweight materials, rounded
contours, padding, ‘skins’ (padding with embedded sensors) and sensors at the robot
base or joints that measure and control force and speed, ensuring these do not go beyond
defined thresholds if contact occurs.
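The threshold logic described above can be sketched as follows. The limit values here are purely illustrative assumptions, not drawn from any ISO standard.

```python
# Minimal sketch of the idea above: joint sensors report force (N) and
# speed (m/s); the controller halts the cobot if either reading exceeds
# a defined safety limit.

FORCE_LIMIT_N = 140.0   # hypothetical contact-force limit
SPEED_LIMIT_MS = 0.25   # hypothetical speed limit near a worker

def safety_check(force_n: float, speed_ms: float) -> str:
    """Return the controller action for one sensor reading."""
    if force_n > FORCE_LIMIT_N or speed_ms > SPEED_LIMIT_MS:
        return "STOP"       # halt motion on threshold breach
    return "CONTINUE"

print(safety_check(90.0, 0.2))    # CONTINUE
print(safety_check(200.0, 0.2))   # STOP
```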

Safety with cobots

• Industrial machinery in the form of collaborative robots should ensure safety.
The International Organization for Standardization (ISO) has introduced four
types of collaborative mode as a standard for robots, such that compliance with
these standards means the application of the robot is safe.

• There is no absolute certainty in the safety of cobots, so each application should be
based on correct risk analysis, maintenance of legal standards for health and safety
at the workplace, and avoidance of end effectors that may be harmful.
Advantages of collaborative robots
• Collaborative robots lower the barrier to profitable robotic automation. Cobots can
be used to automate individual components of a production line with only slight
variation to the rest of the line, giving companies that have not yet automated their
production processes, such as small- to medium-scale manufacturers, an entry
point to the productivity and quality enhancement that such cobots bring.
• For automotive manufacturers that have automated the production of cars, the
additional use of cobots supports workers in completing final assembly tasks while
avoiding the risk of chronic back injuries. Cobots allow manufacturers to automate
the parts of processes that are monotonous and tiresome for humans, such as
fetching parts, feeding machines and assessing quality.
• Lightweight cobots can be easily moved around the factory and utilise less
factory floor space, which is a cost factor for manufacturers. Industrial
robots often operate from a fixed mounting, but there is demand for mobile



industrial robots that combine a mobile base and a (collaborative) robot. Mobile
cobots have the advantage of carrying materials from one workstation to
another and unloading them, or can feed a machine at a second workstation.
• Cobots are used to hold a heavy part steady in the required position for a worker
to fit screws. Cobots allow manufacturers to enhance productivity by using
robots to complement human skills.

MACHINE LEARNING

Machine learning is a branch of artificial intelligence in which a computer program can
learn and adapt to new data without human intervention. A machine learning
algorithm improves the accuracy with which it performs a predictive task. It creates
a model that identifies patterns in data and builds predictions around the data it
identifies. The model uses parameters built into the algorithm to form patterns for its
decision-making process. When new or additional data becomes available, the
algorithm automatically adjusts the parameters to check for a pattern change.
Companies and governments using ML create new insights from tapping into big data.
ML has augmented the modelling of hazards such as earthquakes, wildfires and
weather. This application uses time-coded data from hundreds or thousands of
sensors, whether physical (weather stations or earthquake stations) or remote
(satellites), along with other geophysical characteristics, to predict hazard
outcomes. To do so, data is gathered on the exposure, and the damage prediction
algorithm is trained using the impact of past events. Next, it infers and identifies the
key aspects of exposure that influence the impact of a disaster, as in flood damage
prediction and machine learning-powered seismic resilience. Once trained, these
algorithms can be used to predict damage in different cities or countries. Post-disaster
event mapping and damage assessment are also emerging as key applications of machine
learning.
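The learn-and-adjust loop described above can be sketched with a minimal gradient-descent model in pure Python. The data is synthetic and the routine is only an illustration of parameters adapting as new observations arrive, not any production ML system.

```python
# Sketch: a model holds parameters, fits them to data, and re-adjusts
# when new data arrives -- no human retuning required.

def fit(xs, ys, w=0.0, b=0.0, lr=0.01, epochs=2000):
    """One-variable linear regression by gradient descent."""
    n = len(xs)
    for _ in range(epochs):
        grad_w = sum(2 * (w * x + b - y) * x for x, y in zip(xs, ys)) / n
        grad_b = sum(2 * (w * x + b - y) for x, y in zip(xs, ys)) / n
        w -= lr * grad_w
        b -= lr * grad_b
    return w, b

# Initial training data follows y = 2x + 1.
xs, ys = [0, 1, 2, 3], [1, 3, 5, 7]
w, b = fit(xs, ys)

# New observations arrive; the same routine adjusts the parameters.
xs += [4, 5]
ys += [9, 11]
w, b = fit(xs, ys, w, b)
print(round(w, 2), round(b, 2))   # close to 2.0 and 1.0
```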



Deep Learning

Deep learning is a subset of machine learning, a function of AI that mimics the
workings of the human brain in processing data for use in decision making. Deep
learning can learn from data that is both unstructured and unlabelled. Unstructured data
is so vast that it might take humans decades to comprehend it and extract relevant
information. Companies realize the incredible potential that can result from unravelling
this wealth of information and are increasingly adopting AI systems for automated
support. Deep learning utilizes a hierarchical set of artificial neural networks to carry
out the process of machine learning. Artificial neural networks are built in a manner
similar to the human brain, with neuron nodes connected together like a web. While
traditional programs build analyses from data in a linear way, the hierarchical function
of deep learning systems enables machines to process data with a nonlinear approach.
Deep learning has evolved along with the digital era, which has brought about an
explosion of data in all forms and from every region of the world. This data, known as
big data, is drawn from diverse sources: social media, internet search engines,
e-commerce platforms, and online cinemas. This huge amount of data is readily
accessible and can be shared through applications such as cloud computing.
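The hierarchical, nonlinear processing described above can be sketched as a toy two-layer network: each layer applies weights followed by a nonlinearity (ReLU), unlike a single linear transformation. The weights below are arbitrary illustrative numbers, not a trained model.

```python
# Toy forward pass through a two-layer neural network.

def relu(v):
    # Nonlinearity: negative values are clipped to zero.
    return [max(0.0, x) for x in v]

def layer(inputs, weights, biases):
    # One dense layer: weighted sum per neuron, plus a bias.
    return [sum(w * x for w, x in zip(row, inputs)) + b
            for row, b in zip(weights, biases)]

def forward(x):
    h = relu(layer(x, [[1.0, -1.0], [0.5, 0.5]], [0.0, 0.1]))  # hidden layer
    out = layer(h, [[1.0, 2.0]], [0.0])                        # output layer
    return out[0]

print(forward([3.0, 1.0]))
```

Stacking more such layers is what makes the processing "deep": each level forms patterns over the previous level's outputs.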

KEY TAKEAWAYS

• Large language models utilize deep learning algorithms to recognize,
interpret, and generate human-sounding language.
• A large language model utilizes massive datasets, often featuring 100 million
or more parameters, in order to solve common language problems.
• Developed by OpenAI, ChatGPT is one of the most recognizable large
language models.
• Some of the ways in which large language models are used include content
creation, translation, and virtual chat or assistant applications.

DIGITAL ECONOMY

The digital economy encompasses all the ways in which digital technologies are diffusing
into the economy. One measure is “the share of total economic output derived from a
number of broad ‘digital’ inputs. These digital inputs include digital skills, digital
equipment (hardware, software and communications equipment) and the intermediate
digital goods and services used in production. Such broad measures reflect the
foundations of the digital economy”.

Main components of the digital economy

With digital technologies underpinning ever more transactions, the digital economy is
becoming increasingly inseparable from the functioning of the economy as a whole. The
different technologies and economic aspects of the digital economy can be broken down
into three broad components:



1. Core aspects or foundational aspects of the digital economy, which comprise
fundamental innovations (semiconductors, processors), core technologies
(computers, telecommunication devices) and enabling infrastructures (Internet
and telecoms networks).
2. Digital and information technology (IT) sectors, which produce key products
or services that rely on core digital technologies, including digital platforms,
mobile applications and payment services. The digital economy is to a high degree
affected by innovative services in these sectors, which are making a growing
contribution to economies, as well as enabling potential spillover effects to other
sectors.
3. A wider set of digitalising sectors, covering the rest of the economy where
digital products and services are increasingly being used, such as e-commerce,
the platform economy and digitalised agriculture.

DATA TRAFFIC AND DATA CENTRES

The amount of data generated in the evolving digital economy is constantly and
rapidly increasing. Indeed, estimates provided by private companies are mind-boggling.
A white paper by IBM on Marketing Trends for 2017 noted that 2.5
quintillion bytes of data are created every day. It added: “To put that into perspective,
90 percent of the data in the world today has been created in the last two years alone”.
Global Internet Protocol (IP) traffic, a proxy for data flows, has grown dramatically in
the past two decades. In 1992, global Internet networks carried approximately
100 gigabytes (GB) of traffic per day. Ten years later, it reached 100 GB per
second. Fast-forward to 2017, and such traffic had surged to more than 46,600
GB per second, reflecting both qualitative and quantitative changes in the
content. But despite the rapid growth to date, the world is only in the early stages of
the data-driven economy: by 2022 global IP traffic is projected to reach 150,700 GB
per second.
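The figures quoted above imply the following rough compound annual growth rates; this is a quick arithmetic sketch, not a calculation from the source.

```python
# Compound annual growth rate implied by the traffic figures (GB/s).

def cagr(start, end, years):
    return (end / start) ** (1 / years) - 1

per_day_1992 = 100 / 86400          # 100 GB/day expressed in GB/s
print(f"1992-2002: {cagr(per_day_1992, 100, 10):.0%} per year")
print(f"2002-2017: {cagr(100, 46600, 15):.0%} per year")
print(f"2017-2022 (projected): {cagr(46600, 150700, 5):.0%} per year")
```

Even the projected 2017–2022 slowdown still implies traffic growing by roughly a quarter every year.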

Another related key technology in the digital economy is data analytics, sometimes
dubbed “big data”. This refers to the increasing capacity to analyse and
process massive amounts of data. Indeed, the above technologies have one element
in common: they all strongly rely on data. Digital data are one of the core
elements of value creation in the digital economy.

Artificial intelligence and data analytics

Developments in AI, including machine learning, are enabled by the large
amounts of digital data that can be analysed to generate insights and predict
behaviour using algorithms, as well as by advanced computer processing
power. AI is already in use in areas such as voice recognition and commercial
products (such as IBM’s Watson). It has been estimated that this general-purpose
technology has the potential to generate additional global economic output of around
$13 trillion by 2030, contributing an additional 1.2 per cent to annual GDP growth
(ITU, 2018b). At the same time, it may widen the technology gap between those that
have and those that do not have the capabilities to take advantage of this technology.
China and the United States are set to reap the largest economic gains from AI, while
Africa and Latin America are likely to see the lowest gains. China, the United States


and Japan together account for 78 per cent of all AI patent filings in the world
(WIPO, 2019).

Challenges for developing countries

Developing countries face three main challenges in promoting equal access to
the benefits of frontier technologies:

• Income poverty – Many people in developing countries cannot afford new goods or
services, particularly those in rural areas. In this case the barriers are not
technological but economic and social.
• Digital divide – Many frontier technologies rely on steady, high-speed fixed Internet
connections, but almost half of the world’s population remains offline. Many
developing countries lack adequate digital infrastructure, and for most of their
people Internet costs are prohibitive.
• Shortage of skills – In developing countries, basic and standard digital skills are on
average 10 to 20 percentage points lower than in developed countries. Many
frontier technologies require at least literacy and numeracy skills. Other
technologies require digital skills, including the ability to understand digital media,
to find information, and to use these tools to communicate with others.

Metaverse
Metaverse, a combination of the prefix “meta” (implying transcending) with the word
“universe”, describes a hypothetical synthetic environment linked to the physical world.
The word ‘metaverse’ was first coined in a piece of speculative fiction named Snow Crash,
written by Neal Stephenson in 1992. In this novel, Stephenson defines the metaverse
as a massive virtual environment parallel to the physical world, in which users
interact through digital avatars. An avatar is the digital representation of a
player in the metaverse; players interact with other players or with computer
agents through their avatars. A player may create different avatars in different
applications or games. For example, the created avatar may take a human shape or
that of an imaginary creature or animal. In social applications that require remote
presence, facial and motion characteristics reflecting the physical human
are essential.



Virtual reality (VR) aims to create a fully immersive user experience, replacing
physical reality with a digital environment. VR requires specialty hardware and is
most commonly achieved through headsets like the Oculus Quest that rely on
stereoscopic displays, spatial audio, and motion-tracking sensors to simulate a “real”
experience.

Augmented reality (AR) layers virtual elements onto real-world environments via
smartphones or heads-up displays (HUDs). Rather than focusing on immersion, AR
relies on software that extracts data from visual representations of the physical world to
overlay and superimpose computer-generated sensory inputs such as sound, video,
graphics, or other virtual content such as annotations or real-time commentary.

Mixed reality (MR) is sometimes characterized as existing on a spectrum between AR and
VR, but it shares more in common with AR. The key difference is that MR aims to provide
a more interactive experience than basic AR applications. While basic AR simply overlays
information onto a physical environment, virtual content in an MR environment can
interact with and respond to non-digital objects or content in real time.

XR, sometimes referred to as “extended reality,” is an umbrella term often used to
encompass AR, VR, and MR, as well as other future immersive reality applications,
technologies, or experiences.
The most common type of hardware is a consumer-grade wearable headset, known as a
head-mounted display (HMD); more powerful, less expensive HMDs are the most likely
path to widespread adoption of immersive digital realities.



What types of data are collected in XR?

› Sensor Information — Devices can include cameras, and motion and depth sensors, to
collect information about the immediate physical environment and physical movements.
› Audio Information — Devices can include microphones that can capture audio of the
user’s voice, as well as acoustic sound from the device’s surroundings.
› Biometrically-derived Information — Devices also include inward-facing sensors that
can track pupil measurements and gaze, as well as iris identification.
› Location Information — Devices can collect approximate location information using
the device’s IP address and may derive precise geolocation information and other
location information from data collected from the device as well as location-based
services including Wi-Fi and Bluetooth.
› Device Information — Devices can include log files that include information about
hardware and software, device identifiers, and IP addresses.
› Usage and Technical Information — Devices can collect information about the apps
used and purchased on XR platforms, including application telemetry, time spent using
app features, and interactions with other users.

5G and Edge Computing are potentially vital infrastructure for XR experiences.

Fifth generation mobile networks (5G) will bring not just ubiquitous, high-speed
connectivity, but also facilitate edge computing, a form of cloud computing
that brings digital content and computing resources closer to a user. This reduces
latency, which is important for immersive XR, and could facilitate the introduction of
lower cost, low-weight wearables like AR glasses.
› AR Cloud or Point Cloud is a persistent digital content layer that is mapped to objects
and locations in the physical world, providing a digital legend to annotate objects and
places in the physical world. For example, Niantic, the developer of Pokémon Go,
developed a shared “Real World Platform.”
› Smart Glasses or Digital Eyewear is the holy grail of visual AR. While mobile AR has
the drawback of forcing users to hold out their devices as viewfinders, and static smart
screens or HUDs lack portability, a lightweight, fashionable, and always-on wearable
promises to be the next mainstream digital platform.

› Spatial Computing is an umbrella term for technologies that understand the physical
world and can communicate and navigate through those spaces. At a basic level, it can
include basic location sensors like GPS and location-based services including Wi-Fi and
Bluetooth, but XR requires highly precise location awareness to present real-time
immersive digital content.

› Computer Vision is a field of artificial intelligence that trains computers to interpret
and understand the visual world. In addition to mapping a visual environment, XR
experiences will benefit from advances in facial recognition and characterization as well
as object recognition.

Blockchain is a distributed database in which data is stored in blocks instead of
structured tables. Data generated by users is filled into a new block, which is then
linked onto previous blocks. All blocks are chained in chronological order. Users


store blockchain data locally and synchronise it with the blockchain data stored on
peer devices using a consensus model. Users are called nodes in the blockchain. Each node
maintains the complete record of the data stored on the blockchain after it is chained. If
there is an error on one node, millions of other nodes can be referenced to correct the error.
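The chained-block structure described above can be sketched in a few lines of Python. This is a minimal illustration of hash-linking, not a real blockchain implementation: each block stores the previous block's hash, so altering any earlier block breaks every later link.

```python
import hashlib
import json

def make_block(data, prev_hash):
    # A block records its data, the previous block's hash, and its own hash.
    block = {"data": data, "prev_hash": prev_hash}
    block["hash"] = hashlib.sha256(
        json.dumps({"data": data, "prev_hash": prev_hash},
                   sort_keys=True).encode()).hexdigest()
    return block

chain = [make_block("genesis", "0" * 64)]
chain.append(make_block("tx: A pays B", chain[-1]["hash"]))
chain.append(make_block("tx: B pays C", chain[-1]["hash"]))

def valid(chain):
    # Each block's stored link must match the previous block's actual hash.
    for prev, cur in zip(chain, chain[1:]):
        if cur["prev_hash"] != prev["hash"]:
            return False
    return True

print(valid(chain))            # True
# Rewriting history changes block 1's hash, so block 2's link now fails.
chain[1] = make_block("tampered", chain[1]["prev_hash"])
print(valid(chain))            # False
```

This is why "millions of other nodes" can detect and correct an error: a tampered block is immediately inconsistent with the copies everyone else holds.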

With the increasing demand for decentralised content creation in the metaverse, NFTs
are playing a more critical role. NFTs enable created properties to be traded with
customised values. However, research on NFTs is still in an early phase. Currently,
most NFT solutions are based on Ethereum; hence drawbacks such as slow confirmation
and high transaction costs are naturally inherited. Furthermore, the blockchain adopts
proof of work as the consensus mechanism, which requires participants to spend effort
on puzzles to guarantee data security. However, the verification process for encrypted
data is not as fast as conventional approaches. Hence, faster proof of work to accelerate
data-access speed and scalability is a challenge to be solved. Currently, more than
$60 is required to mint an NFT token, which is obviously too much for small-scale
transactions. Anonymity is another challenge: most NFT schemes adopt pseudo-
anonymity instead of strict anonymity, which may lead to privacy leakage.

Object detection in the metaverse can be classified into two categories: detection of
specific instances (e.g., face, marker, text) and detection of generic categories (e.g., cars,
humans). Object detection is a fundamental scene-understanding task aiming to
localise the objects in an image or scene and identify the class information for each
object. It is widely used in XR and is an indispensable task for achieving the
metaverse. In the metaverse, action recognition can also be very meaningful. A human
avatar needs to recognise the actions of other avatars or objects so that it can take the
correct action accordingly in the 3D virtual space. Moreover, human avatars need to
emotionally and psychologically understand others and the 3D virtual world as they
would the physical world. More adaptive and robust action recognition algorithms need
to be explored.

EDGE AND CLOUD

With continuous, omnipresent, and universal interfaces to information in the physical
and virtual worlds, the metaverse encompasses the reality-virtuality continuum and
allows the user a seamless experience in between. To date, the most attractive and widely
adopted metaverse interfaces are mobile and wearable devices, such as AR glasses,
headsets, and smartphones, because they allow convenient user mobility. However, the
intensive computation required by the metaverse is usually too heavy for mobile devices.

User Experienced Latency

In the metaverse, it is essential to guarantee an immersive feeling for the user to provide
the same level of experience as reality. One of the most critical factors that impact the
immersive feeling is the latency, e.g., motion to photon (MTP) latency.

The metaverse is transforming how we socialise, learn, shop, play, travel, etc. Besides the
exciting changes it is bringing, we should be prepared for how it might go wrong. And
because the metaverse will collect more user data than ever, the consequence if things go


south will also be worse than ever. One of the major concerns is the privacy risk. For
instance, the tech giants, namely Amazon, Apple, Google (Alphabet), Facebook, and
Microsoft, have advocated passwordless authentication for a long time, which verifies
identity with a fingerprint, face recognition, or a PIN.

Human- and user-centric networking

The metaverse is a user-centric application by design. As such, every component of the
metaverse should place the human user at its core. In terms of network design, such
consideration can take several forms, from placing the user experience at the core of
traffic management to enabling user-centric sensing and communication.

Security and Privacy. As for security, the highly digitised physical world will require
users to authenticate their identities frequently when accessing certain applications and
services in the metaverse, as well as XR-mediated IoT and mechanised everyday objects.
Additionally, protecting digital assets is key to securing metaverse civilisations at
scale. Security researchers should consider new mechanisms to enable application
authentication with alternative modalities, such as biometric authentication driven
by muscle movements, body gestures, eye gazes, etc.

Trust and Accountability. The metaverse, i.e., the convergence of XR and the Internet,
expands the definition of personal data to include biometrically-inferred data, which is
prevalent in XR data pipelines. Privacy regulations alone cannot be the basis of the
definition of personal data, since they cannot keep up with the pace of innovation. One of
the grand challenges will be to design a principled framework that can define
personal data while keeping up with potential innovations. As the metaverse ecosystem
evolves, it must consider the rights of minorities and vulnerable communities from the
beginning, because, unlike in traditional socio-technical systems, potential mistreatment
would have far more disastrous consequences: the victims might feel mistreated as if
they were in the real world.

What Is a Non-Fungible Token (NFT)?

Non-fungible tokens, or NFTs, are cryptographic assets on a blockchain with unique
identification codes and metadata that distinguish them from each other.
Unlike cryptocurrencies, they cannot be traded or exchanged at equivalency. This differs
from fungible tokens like cryptocurrencies, which are identical to each other and can
therefore be used as a medium for commercial transactions.

WHAT YOU NEED TO KNOW

• NFTs are unique cryptographic tokens that exist on a blockchain and cannot be
replicated.
• NFTs can be used to represent real-world items like artwork and real estate.
• "Tokenizing" these real-world tangible assets allows them to be bought, sold,
and traded more efficiently while reducing the probability of fraud.
• NFTs can also be used to represent people's identities, property rights, and more.



The distinct construction of each NFT has the potential for several use cases. For
example, they are an ideal vehicle to digitally represent physical assets like real estate
and artwork. Because they are based on blockchains, NFTs can also be used to remove
intermediaries and connect artists with audiences or for identity management. NFTs can
remove intermediaries, simplify transactions, and create new markets.

Much of the current market for NFTs is centered around collectibles, such as digital
artwork, sports cards, and rarities. Perhaps the most hyped space is NBA Top Shot, a
place to collect non-fungible tokenized NBA moments in a digital card form. Some of
these cards have sold for millions of dollars.

Understanding NFTs
Like physical money, cryptocurrencies are fungible, i.e., they can be traded or exchanged
one for another. For example, one Bitcoin is always equal in value to another Bitcoin.
Similarly, a single unit of Ether is always equal to another unit. This fungibility
characteristic makes cryptocurrencies suitable for use as a secure medium of
transaction in the digital economy.
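The fungible/non-fungible distinction can be illustrated with a toy sketch; the account names and token IDs below are hypothetical.

```python
# Fungible: only the quantity matters -- any 1 BTC equals any other,
# so a balance is just a number.
balances = {"alice": 2.0, "bob": 1.0}
balances["alice"] -= 0.5
balances["bob"] += 0.5        # which particular "coin" moved is meaningless

# Non-fungible: each token is a distinct record keyed by a unique ID,
# with its own metadata; tokens are not mutually interchangeable.
nfts = {
    "token-001": {"owner": "alice", "metadata": "artwork #1"},
    "token-002": {"owner": "bob",   "metadata": "artwork #2"},
}
nfts["token-001"]["owner"] = "bob"   # transfers THIS specific token only

print(balances)
print(nfts["token-001"])
```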



What is the hydrogen economy?

The hydrogen economy refers to using hydrogen, both as a fuel and in fuel cells, to
decarbonise economic sectors which are hard to electrify or to switch to other alternative
sources of power. Vehicular transport, aviation, shipping, utilities and heating are some of
the sectors where hydrogen can have the greatest benefits.

Unlike fossil fuels, which emit hazardous greenhouse gases, hydrogen used as a fuel
produces only water vapour as a by-product. For this reason, hydrogen is considered a
great alternative source of energy in a low- to no-carbon economy.

The launch of the National Hydrogen Mission (NHM) was announced by PM Modi.

Concept –

• The aim is to make India a global hub for the production and export of green
hydrogen.


• The proposal for the National Hydrogen Mission was made in the Budget
2021 to launch NHM that would enable the generation of hydrogen “from green
power sources”.
• Currently imports 85% of its oil and 53% of gas demand, spending ₹12
trillion annually to meet the energy needs.
• As part of the 2015 Paris Agreement, India has pledged to generate 40 per cent
of its power through renewable energy — an aim it seeks to fulfil by 2030.

What is green hydrogen?


Green hydrogen is hydrogen produced by a process that does not emit any greenhouse
gases (such as carbon dioxide or methane). The best example of green hydrogen is the
hydrogen produced by splitting water using electricity from solar plants or wind
turbines.
What is not green hydrogen?
Hydrogen produced by a process that leaves some carbon footprint is not green
hydrogen. Most hydrogen today is produced by steam reforming of methane, which
releases some carbon dioxide. While there is no official definition of what any hydrogen
'colour' means, brown hydrogen refers to hydrogen produced from coal, and hydrogen
produced from natural gas or petroleum is grey hydrogen. If the carbon dioxide emitted
while producing grey or brown hydrogen is captured and stored safely away, the result
may be called 'blue hydrogen'.
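The informal colour taxonomy above can be summed up in a small sketch. The function name and feedstock labels below are illustrative choices, since there is no official definition of the colours:

```python
def hydrogen_colour(feedstock, carbon_captured=False):
    """Classify hydrogen by the informal 'colour' taxonomy (illustrative only)."""
    if feedstock == "renewable electricity":
        return "green"                               # electrolysis, no emissions
    if feedstock == "coal":
        return "blue" if carbon_captured else "brown"
    if feedstock in ("natural gas", "petroleum"):
        return "blue" if carbon_captured else "grey"
    return "unclassified"
```

For example, steam-reformed methane without capture comes out "grey", while the same process with carbon capture and storage comes out "blue".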
Why is it in the news these days?
Green hydrogen is in the news today because the Government of India (like most other
countries in the world) is pitching for green hydrogen in the manufacture of fertilisers
and refining of petroleum. That, of course, is for starters. Eventually, any industry would
be made to turn to hydrogen for all its energy requirements. For example, the process of
steel making is essentially to kick-out the oxygen in the iron oxide (ore). Conventionally,
carbon has been used to pick up the oxygen, resulting in carbon dioxide emissions, but
even hydrogen can do the job.
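As a back-of-the-envelope illustration of the steel example, the reduction reaction Fe2O3 + 3H2 → 2Fe + 3H2O fixes how much hydrogen a tonne of iron needs. The molar masses below are standard values and the calculation is illustrative, not an industrial figure:

```python
M_FE = 55.85   # g/mol, iron
M_H2 = 2.016   # g/mol, hydrogen gas

def h2_per_tonne_iron():
    """kg of H2 needed to reduce enough Fe2O3 to yield one tonne of iron."""
    mol_fe = 1_000_000 / M_FE   # moles of Fe in one tonne (1e6 g)
    mol_h2 = 1.5 * mol_fe       # stoichiometry: 3 mol H2 per 2 mol Fe
    return mol_h2 * M_H2 / 1000  # grams -> kg

print(round(h2_per_tonne_iron(), 1))  # roughly 54 kg of H2 per tonne of iron
```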
The Government wants to make it mandatory for industries (starting with fertilisers and
oil refining) to use green hydrogen for a specified percentage of their overall energy
requirements. Such a requirement is called a 'green purchase obligation' (GPO),
somewhat similar to the existing 'renewable purchase obligation' (RPO).
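The GPO idea can be made concrete with a rough calculation. The plant size, GPO share, and heating value below are assumed figures for illustration, not from any actual notification:

```python
H2_LHV_KWH_PER_KG = 33.3  # approximate lower heating value of hydrogen, kWh/kg

def gpo_hydrogen_tonnes(annual_energy_gwh, gpo_fraction):
    """Tonnes of green hydrogen needed to cover a GPO share of annual energy."""
    green_kwh = annual_energy_gwh * 1e6 * gpo_fraction  # GWh -> kWh
    return green_kwh / H2_LHV_KWH_PER_KG / 1000         # kg -> tonnes

# A hypothetical refinery using 500 GWh/year under a 10% GPO:
print(round(gpo_hydrogen_tonnes(500, 0.10)))  # roughly 1,500 tonnes of hydrogen
```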
Is water splitting using renewable electricity the only way of producing green
hydrogen?
It is the most promising technology, but not the only one available. A few other pathways
exist and more are being discovered. For example, you can make hydrogen by feeding
biomass to microbes such as bacteria, either directly or with the help of enzymes. With
emerging technologies one could split water directly using sunshine, bypassing
electricity.
What are the challenges?
If we assume that the low-hanging fruit today is the electrolysis of water using
renewable energy, the major challenge is cost. To bring costs down, the cost of the
electrolyser (the device that splits water) must fall, which is a function of scale: the
more hydrogen plants are set up, the cheaper electrolysers become. Another challenge
is the efficiency of the electrolysers, i.e., how much electricity they consume to produce
a kilogram of hydrogen. Today, this is about 55 kWh per kg of hydrogen.
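Using the 55 kWh-per-kg figure above, the electricity cost of a kilogram of green hydrogen scales directly with the power tariff. The Rs 3/kWh tariff below is a hypothetical input, not a quoted price:

```python
ELECTROLYSER_KWH_PER_KG = 55  # electricity consumed per kg of H2 (from the text)

def h2_electricity_cost(electricity_price_per_kwh):
    """Electricity cost (in the tariff's currency) per kg of green hydrogen."""
    return ELECTROLYSER_KWH_PER_KG * electricity_price_per_kwh

# At a hypothetical renewable tariff of Rs 3 per kWh:
print(h2_electricity_cost(3))  # 165 (Rs per kg, electricity cost alone)
```

This is why both cheaper renewable power and more efficient electrolysers (fewer kWh per kg) matter for bringing green hydrogen down to a competitive price.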
Will it help India become a net-zero nation?
Hydrogen is among the cleanest fuels and should play an important role in India's net-zero
ambitions.

• According to the World Energy Council (WEC), as of 2019, “96 per cent of hydrogen
is produced from fossil fuels via carbon intensive processes”. Hydrogen thus
obtained is called ‘grey’ hydrogen as the process, though not as expensive as the
other methods, releases a lot of carbon dioxide.
• Hydrogen can be stored physically as either a gas or a liquid.
o Storage of hydrogen as a gas typically requires high-pressure tanks.
o Storage of hydrogen as a liquid requires cryogenic
temperatures because the boiling point of hydrogen at one atmosphere
pressure is −252.8°C.
o Hydrogen can also be stored on the surfaces of solids (by adsorption) or
within solids (by absorption).
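For reference, the cryogenic condition in the storage bullet converts to the absolute (Kelvin) scale with simple unit arithmetic:

```python
def celsius_to_kelvin(t_c):
    """Convert a Celsius temperature to Kelvin."""
    return t_c + 273.15

# Hydrogen's boiling point at one atmosphere, from the text above:
print(round(celsius_to_kelvin(-252.8), 2))  # 20.35, i.e. about 20 K
```

Only about 20 degrees above absolute zero, which is why liquid-hydrogen storage is so energy- and equipment-intensive.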

Policy Challenges:

1. One of the biggest challenges faced by the industry in using hydrogen
commercially is the economic sustainability of producing green or blue
hydrogen.
2. The technologies used in the production and use of hydrogen, such as Carbon
Capture and Storage (CCS) and hydrogen fuel cells, are at a nascent stage and are
expensive, which in turn increases the cost of producing hydrogen.
3. Maintaining fuel cells after a plant is completed can also be costly.
4. The commercial usage of hydrogen as a fuel and in industries requires
mammoth investment in R&D of such technology and infrastructure for
production, storage, transportation and demand creation for hydrogen.

