Handout 52
» Introduction
ú IoT is a network of physical objects embedded with sensors, software, and other technologies for connecting and
exchanging data with other devices and systems via the internet.
ú A thing in the Internet of Things can be a person with a heart monitor implant, a farm animal with a biochip
transponder, an automobile with built-in sensors to alert the driver when tire pressure is low - or any other
natural or man-made object that can be assigned an IP address and provided with the ability to transfer data over
a network.
ú Movement from IPv4 (32-bit addresses) to IPv6 (128-bit addresses) also played a role in making IoT possible.
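A rough sense of why the IPv4-to-IPv6 move matters can be sketched in Python with the standard ipaddress module. This is an illustrative sketch only; the sample address is from the reserved documentation range (2001:db8::/32), not a real device.

```python
# Compare the IPv4 and IPv6 address spaces, using Python's
# standard-library ipaddress module.
import ipaddress

ipv4_space = 2 ** 32    # 32-bit addresses: about 4.3 billion
ipv6_space = 2 ** 128   # 128-bit addresses: about 3.4 x 10^38

print(f"IPv4: {ipv4_space:,} addresses")
print(f"IPv6: {ipv6_space:.1e} addresses")

# With IPv6, every IoT "thing" can carry its own globally unique address.
sensor = ipaddress.ip_address("2001:db8::beef")  # documentation-range example
print(sensor.version)  # 6
```

The point of the sketch: IPv4's 4.3 billion addresses could never cover billions of sensors each needing its own IP, while the IPv6 space is effectively inexhaustible.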
» Advantages
ú Reduce waste, loss, and cost -> by early detection of problems and taking corrective steps.
ú We would know which things need replacing, repairing, or recalling, and whether they are fresh or past their
best. This helps increase the reliability of a device.
- Applications
a. Health Care Sector: IoT can improve the reliability and performance of life-critical systems. For e.g., IoT-based devices can be used in combination with a cardiac monitor to raise an alarm to the doctors in case of abnormality.
b. Agriculture Sector: IoT can be used to gather live pedological (soil) data that can be used by scientists to improve crop yields.
c. Transportation Sector:
ú Traffic Management – real-time traffic data -> better traffic management.
d. Energy Management
ú Managing temperature in a Nuclear Power Plant (using sensors and IoT)
ú Real time efficiency analysis of Solar Power panels.
e. Research and Development:
ú E.g. – Recent development of a wireless communication system for satellites by NASA, through which satellites
can communicate with each other.
f. Safety and Security
ú Real time tracking of criminals – using tagging and IoT.
» Narrowband IoT (also known as NB-IoT or LTE-M2) is a Low-Power Wide-Area Network (LPWAN) technology
which doesn't operate in the licensed LTE construct. It is a cellular network technology primarily built to connect
things.
ú Cost efficiency: For e.g., the device complexity of analog-to-digital and digital-to-analog converters would
reduce. NB-IoT chips are simpler to create and thus come cheaper.
ú Power efficiency: Low device power consumption -> improved battery life.
ú Connecting remote areas where LTE networks are still not available. In these areas there may be wider GSM
deployments with unused bands that NB-IoT could leverage.
» NB-IoT focuses specifically on indoor coverage, low cost, long battery life, and high connection density. It uses a
subset of the LTE standard but limits the bandwidth to a single narrow band of 200 kHz.
- Applications
» Smart Metering
» Intruder and Fire Alarms
» Connected personal appliances measuring health indicators.
» Tracking of persons, animals, or objects
» Smart city infrastructure such as streetlamps or dustbins.
4. BIG DATA
- Intro
» Big Data is a collection of data that is huge in volume (petabytes and exabytes of data) and growing exponentially
with time. It is data of such large size and complexity that none of the traditional data management tools can store
or process it efficiently. Big Data can be structured, semi-structured, or unstructured, but it generally has the
potential to be mined for information.
» Examples of Big Data:
o BSE (Bombay Stock Exchange), which generates gigabytes of data per day.
o Social Media – Around 500+ terabytes of new data get ingested into the database of social media site
Facebook every day.
o Data from search engines (like Google, Bing etc.) and Online portals like Amazon.
- Challenges include capture, analysis, data curation, search, sharing, storage, transfer, visualization, querying, updating,
and information privacy.
- Big data is characterized by 3 Vs – Volume, Velocity and Variety.
» Environment research
» Sports, home, and work – data from wearable sensors in equipment and wearable devices can be combined
with video analytics to get insights that were traditionally impossible to achieve.
» Security Agencies
ú Foil terrorism
5. CLOUD COMPUTING
» Why in news?
Concerns
§ Loss of control over certain sensitive data
§ Limited customization options
§ E.g., a restaurant with a limited menu is cheaper than a personal chef who can cook anything you
want.
- Technology behind cloud: There are two vital technologies at the heart of Cloud Computing:
▫ Virtualization: It lets computer resources be shared through multiple virtual machines.
▫ Network: It lets data requests flow to and from the datacenters, or the Cloud, through the Internet.
In cloud computing hardware resources are distributed across multiple locations and there is diverse choice of
software that is available to consumers.
- Service Models
§ Infrastructure as a Service (IaaS)
§ Refers to online services that abstract the user from the details of infrastructure like physical
computing resources, location, scaling, security, and backup.
§ Platform as a Service (PaaS)
§ The provider typically develops toolkits and standards for development and channels for distribution
and payment.
§ In the PaaS model, cloud providers deliver a computing platform, typically including the operating system,
programming-language execution environment, database, and web server.
§ Software as a Service (SaaS)
§ Users gain access to application software and databases (e.g., Google Photos); the consumer typically
pays on a subscription or pay-per-use basis.
Conclusion: When cloud computing's potential is used well, one can launch rich features faster and put customers
on cloud nine.
7. EDGE COMPUTING
- Why in news?
» The number of IoT-connected devices is expected to reach 25 billion by the end of 2025. Such a large number of
devices in constant communication will generate huge amounts of data that must be processed quickly yet
sustainably. Cloud computing and fog computing have traditionally been used for IoT solutions. However, this
trend is changing with the introduction of edge computing.
- Basics
» Edge computing is the practice of storing, processing, and analyzing data near the edge of the network, where
the data is generated, instead of in centralized data-processing warehouses. This is made possible by placing
networked computing resources as close as possible to where data is created.
» The explosive growth of internet-connected devices – the IoT – along with new applications that require real-
time computing power, continues to drive edge-computing systems.
» Need – As the number of devices transmitting data at the same time grows (with the growth of IoT and
5G), internet speed will suffer and the cost of bandwidth will be tremendous. Edge computing was developed
to solve this problem.
- Significance/Need/Advantages
ú Cost reduction was a key reason which gave rise to the idea of edge computing.
ú Real time data processing without latency – Edge computing will allow smart applications to respond to data
almost instantaneously as it is being created eliminating the lag time.
ú Technologies like self-driving cars will need this technology to work in real time.
ú Efficient data processing and reduced internet bandwidth requirement.
ú Better Security for Sensitive Data: the ability to process data without ever putting it into the public cloud adds a
layer of security.
- Challenges
» Data Security Concerns: Data at the edge may not be as secure as it would have been in a centralized
cloud-based system.
» Distributing the work among edge devices/servers would be a big data-management challenge.
ARTIFICIAL INTELLIGENCE
» Intro
ú Artificial Intelligence is the science and engineering of making intelligent machines, especially intelligent
computer programs which can complete tasks that typically require human intelligence.
» With the explosion of available data and expansion of computing capacity, the world is witnessing
rapid advancements in AI, ML, and deep learning.
- Machine learning is a science that involves the development of self-learning algorithms. Machine learning uses
statistics (mostly inferential statistics) to develop self-learning algorithms. It is a type of artificial intelligence.
» Note: All Machine Learning is AI, but not all AI is machine learning
» For e.g., symbolic logic (rules engines, expert systems, and knowledge graphs) as well as evolutionary
algorithms and Bayesian statistics could all be described as AI, and none of them are machine learning.
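As an illustration of the "self-learning algorithm" idea described above, here is a minimal sketch (not from the handout): linear regression fitted by gradient descent, where the model improves its own parameters purely from data. The toy dataset and learning rate are arbitrary choices for the demo.

```python
# A minimal self-learning algorithm: linear regression trained by
# gradient descent on a toy dataset that follows y = 2x + 1.
xs = [0.0, 1.0, 2.0, 3.0, 4.0]
ys = [1.0, 3.0, 5.0, 7.0, 9.0]

w, b = 0.0, 0.0   # parameters start with no knowledge of the data
lr = 0.05         # learning rate

for _ in range(2000):
    # Mean-squared-error gradients; each step nudges w and b
    # in the direction that reduces the prediction error.
    grad_w = sum(2 * (w * x + b - y) * x for x, y in zip(xs, ys)) / len(xs)
    grad_b = sum(2 * (w * x + b - y) for x, y in zip(xs, ys)) / len(xs)
    w -= lr * grad_w
    b -= lr * grad_b

print(round(w, 2), round(b, 2))  # approaches 2.0 and 1.0
```

The loop never sees the rule "y = 2x + 1"; it recovers it statistically from examples, which is exactly what distinguishes machine learning from the hand-written rules of symbolic AI mentioned above.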
ú Disaster Management: An AI-based flood forecasting system has been deployed in Bihar and is now
being deployed throughout the country. It gives warnings 48 hours earlier about impending floods.
1) NEURAL NETWORKS
- Note: ANNs also rely on training data to learn and improve their accuracy over time.
2) DEEP LEARNING
- Deep learning is a machine learning technique that teaches computers to do what comes naturally to humans: learn
by example. In deep learning, a computer model learns to perform classification tasks directly from images, text, or
sound. It can achieve state-of-the-art accuracy, sometimes exceeding human-level performance. Models are trained
using a large set of labeled data and neural network architectures that contain many layers.
- Most deep learning methods use neural network architectures, which is why deep learning models are often referred
to as Deep Neural Networks. The term "deep" usually refers to the number of hidden layers in the neural network.
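To make "hidden layers" concrete, here is a hedged NumPy sketch of a forward pass through a small deep network with two hidden layers. The layer sizes, random weights, and ReLU activation are illustrative choices, not a trained model.

```python
# Forward pass through a small deep neural network:
# 4 inputs -> 8 hidden -> 8 hidden -> 3 outputs.
import numpy as np

rng = np.random.default_rng(0)

def relu(z):
    # Nonlinearity applied at each hidden layer.
    return np.maximum(0.0, z)

sizes = [4, 8, 8, 3]
weights = [rng.standard_normal((m, n)) * 0.1 for m, n in zip(sizes, sizes[1:])]
biases = [np.zeros(n) for n in sizes[1:]]

def forward(x):
    a = x
    for W, b in zip(weights[:-1], biases[:-1]):
        a = relu(a @ W + b)              # each hidden layer transforms its input
    return a @ weights[-1] + biases[-1]  # output layer: raw scores

scores = forward(rng.standard_normal(4))
print(scores.shape)  # (3,)
```

"Deep" simply means stacking more of those hidden layers; real image or speech models use the same pattern with dozens of layers and learned, rather than random, weights.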
- Applications
» Automated Driving: Automotive researchers are using deep learning to automatically detect objects such as stop
signs and traffic lights. In addition, deep learning is used to detect pedestrians, which helps decrease accidents.
» Aerospace and Defence: Deep learning is used to identify objects from satellites that locate areas of interest
and identify safe or unsafe zones for troops.
» Medical Research: To detect cancer.
» Industrial Automation: Improve work safety around heavy machinery by automatically detecting when people or
objects are within an unsafe distance of machines.
» Electronics: Used in automated hearing and speech translation. For e.g., home assistance devices that respond to
your voice and know your preferences are powered by deep learning applications.
3) ALPHAFOLD
- Why in news?
- Proteins:
» Proteins, along with the nucleic acid sequences that make up a genome, form the basis of all organisms.
» Proteins are ubiquitous in all organisms. By comparing and analyzing protein structures, it is possible to
get ideas about biological evolution, diseases, defence mechanisms etc. This explains the human quest for
finding the structures of proteins. In 1972, Christian B. Anfinsen won the Nobel Prize in Chemistry for his
experiments that showed that a protein could fold into its structure based on the information contained
in the sequence of amino acids. Since then, scientists around the world have been trying to
computationally predict protein structures.
- History:
» In the 1960s, the first protein structures, of myoglobin and hemoglobin, were determined. This was done through
X-ray crystallography, which uses protein crystals and X-rays. Knowing the structure of hemoglobin helped
people understand how it is able to perform its function of transporting oxygen from the lungs to the cells
in the body. It also showed how changing a single amino acid can cause sickle cell anaemia.
» Critical Assessment of Structure Prediction (CASP): Under this, scientists who have experimentally determined
protein structures voluntarily don't submit the structure to the public database but make the protein sequence
available for a structural prediction challenge. That way, the predicted structures can be compared to the
experimentally determined ones without any bias in prediction. Under this, a quantitative measure called the
Global Distance Test (GDT) was also devised [Score 100 – predicted structure matches perfectly; Score 0 – no
match].
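The GDT idea can be sketched as follows. This is a simplified illustration that assumes per-residue distances between predicted and experimental positions are already computed after superposition; the real CASP GDT_TS also searches over superpositions, but it uses the same 1, 2, 4, and 8 Å cutoffs averaged here.

```python
# Simplified GDT_TS-style score: for each cutoff, take the fraction of
# residues whose predicted position lies within that distance of the
# experimental one, then average the four fractions and scale to 0-100.
def gdt_ts(distances):
    """distances: per-residue distances (in angstroms) between the
    predicted and experimentally determined positions."""
    cutoffs = (1.0, 2.0, 4.0, 8.0)
    n = len(distances)
    fractions = [sum(d <= c for d in distances) / n for c in cutoffs]
    return 100 * sum(fractions) / len(cutoffs)

print(gdt_ts([0.5, 0.5, 0.5, 0.5]))      # every residue within 1 A -> 100.0
print(gdt_ts([20.0, 30.0, 50.0, 90.0]))  # no residue within 8 A -> 0.0
```

The two extremes match the handout's description: a perfect prediction scores 100, and a prediction with no residues near their true positions scores 0.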
» Artificial Intelligence:
ú In 2018, AlphaFold from DeepMind joined the game, bringing 'deep learning' AI techniques into the
prediction algorithm. After various changes, they were able to get a median score at the level of
experimental accuracy.
ú This, according to John Moult, meant that the problem of structure prediction for compact single-domain
proteins is essentially solved.
4) DEEP FAKES
- Why in news?
» Deepfakes have emerged as a new tool to spread computational propaganda and disinformation at scale and with
speed.
- Advantages: Synthetic Media/ Deepfakes can create possibilities and opportunities for all people, regardless of how
people listen, speak, or communicate. It can give people voice, purpose, and ability to make an impact at scale and
with speed.
- Concerns: It can be weaponized to inflict harm – political misuse, increased crime against women, endangering
social harmony, etc.