GOVERNMENT POLYTECHNIC, BEED
[Institute Code: 0032]
MICROPROJECT
MAHARASHTRA STATE BOARD OF TECHNICAL EDUCATION, MUMBAI
CERTIFICATE OF MICROPROJECT
Teacher Evaluation Sheet

Sr. No.   Characteristic to be assessed                      Poor           Average        Good           Excellent
                                                            (Marks 1-3)    (Marks 4-5)    (Marks 6-8)    (Marks 9-10)

[A] Process and Product Assessment (Convert total marks out of 06)
  1   Relevance to the course
  2   Literature Review / information collection
  3   Completion of the Target as per project proposal
  4   Analysis and data representation
  5   Quality of Prototype/Model
  6   Report Preparation
      Total Marks Out of (6)

[B] Individual Presentation/Viva (Convert total marks out of 04)
  1   Presentation
  2   Viva

Dated Signature ………………………………………………………………………
Annexure – II
4.0 Literature Review
The literature on Artificial Intelligence (AI) covers its historical trajectory,
technological advancements, diverse applications, ethical considerations, and future
directions. Originating from foundational work in the mid-20th century and formalized
at the Dartmouth Conference in 1956, AI has seen a resurgence driven by
breakthroughs in machine learning, particularly deep learning. This has facilitated its
integration across various industries, including healthcare, finance, manufacturing, and
transportation, where it is employed for tasks ranging from medical imaging analysis to
algorithmic trading. Alongside its proliferation, however, ethical concerns have arisen,
such as algorithmic bias and privacy violations, necessitating the development of
frameworks and regulations to ensure responsible AI deployment. Looking forward,
trends such as explainable AI and federated learning offer promise, but challenges
persist, including the need for greater interpretability and robustness in AI systems.
Ultimately, interdisciplinary collaboration, ethical governance, and continued innovation
will be crucial in harnessing AI's potential while mitigating its risks for the benefit of
society.
Microproject
On
Artificial Intelligence
By
ARMAN SHAIKH
Aim
The aim of a case study about artificial intelligence (AI) can vary with the specific
context, industry, and objectives. The central aim of this case study is:
Assessing Future Trends: Predict future trends in AI adoption and innovation based on
current case studies and industry developments, and discuss potential opportunities and
challenges that may arise as AI continues to evolve.
Introduction
According to some, artificial intelligence is the most promising development for the future.
From curing cancer to resolving the global hunger crisis, artificial intelligence is being
presented as the solution to all of our problems. Others, however, regard it as a threat:
artificial intelligence may give rise to unemployment and inequality, and could even
jeopardize the continued existence of humankind. As the technology entrepreneur Elon
Musk put it: “The benign scenario is that artificial intelligence can do any job that
humans do – but better.”

Deloitte has positioned itself on the optimistic side of that spectrum. “We believe that
artificial intelligence will be extremely helpful to us and to our clients,” says Richard
Roovers, a partner at Deloitte Netherlands and Innovation Lead Transformational
Solutions North-West Europe. Artificial intelligence will enable us to solve problems that
humans are unable, or hardly able, to solve, explains Richard. “Artificial intelligence is
capable of processing massive quantities of data and has the ability to discover patterns
that even the smartest mathematicians are unable to find. That in itself opens up a large
number of new possibilities.”

Those new possibilities are what the case studies here are about. They provide an
overview of the ways in which Deloitte is working to develop applications incorporating
artificial intelligence, both internally and for use with clients. The applications are
diverse, make use of different technologies and can be found in a wide range of
industries. This shows that, aside from all of the predictions for the future, artificial
intelligence has already been a reality in the business sector for some time and forms a
resource that could provide your company with a decisive lead.

The concept of AI dates back several decades, but recent breakthroughs in computational
power, big data, and algorithmic sophistication have propelled its rapid evolution. Today,
AI applications permeate nearly every aspect of our lives: virtual assistants on our
smartphones, predictive algorithms powering financial markets, personalized
recommendations on streaming platforms, and autonomous vehicles navigating city
streets.

One of the key drivers behind the proliferation of AI is its ability to analyze vast amounts
of data with remarkable speed and accuracy, uncovering insights and patterns that would
be virtually impossible for humans to discern. This data-driven approach has unlocked
new opportunities for businesses to enhance decision-making, optimize processes, and
create personalized experiences for customers.

However, the widespread adoption of AI also raises important ethical, societal, and
economic considerations. Concerns about data privacy, algorithmic bias, job
displacement, and the concentration of power in the hands of a few tech giants have
sparked debates about the responsible development and deployment of AI technologies.
In this age of rapid technological change, understanding the capabilities, limitations, and
implications of AI is more important than ever. This introduction sets the stage for
exploring the multifaceted landscape of artificial intelligence, delving into its
applications, challenges, opportunities, and the profound impact it continues to have on
our world.
Machine Learning
Machine learning is the study of programs that can improve their performance on a given
task automatically.[41] It has been a part of AI from the beginning.[e]

There are several kinds of machine learning. Unsupervised learning analyzes a stream of
data, finds patterns, and makes predictions without any other guidance.[44] Supervised
learning requires a human to label the input data first, and comes in two main varieties:
classification (where the program must learn to predict what category the input belongs
in) and regression (where the program must deduce a numeric function based on numeric
input).[45] In reinforcement learning the agent is rewarded for good responses and
punished for bad ones; the agent learns to choose responses that are classified as
"good".[46] Transfer learning is when the knowledge gained from one problem is applied
to a new problem.[47] Deep learning is a type of machine learning that runs inputs
through biologically inspired artificial neural networks for all of these types of
learning.[48] Computational learning theory can assess learners by computational
complexity, by sample complexity (how much data is required), or by other notions of
optimization.
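The regression variety of supervised learning described above can be illustrated with a small sketch in plain Python. This is a hypothetical toy example, not part of the project code: the program is shown labelled numeric examples of a "hidden" function y = 3x + 1 and deduces the parameters itself by gradient descent.

```python
# Toy illustration of supervised learning (regression): the program
# deduces a numeric function y ~ w*x + b from labelled numeric examples.
# A hypothetical sketch in plain Python, not a real ML library.

def train_linear(xs, ys, lr=0.01, epochs=2000):
    """Fit y = w*x + b by gradient descent on mean squared error."""
    w, b = 0.0, 0.0
    n = len(xs)
    for _ in range(epochs):
        # Gradients of the mean squared error with respect to w and b.
        grad_w = sum(2 * (w * x + b - y) * x for x, y in zip(xs, ys)) / n
        grad_b = sum(2 * (w * x + b - y) for x, y in zip(xs, ys)) / n
        w -= lr * grad_w
        b -= lr * grad_b
    return w, b

# Labelled training data for the hidden function y = 3x + 1.
xs = [0, 1, 2, 3, 4]
ys = [1, 4, 7, 10, 13]
w, b = train_linear(xs, ys)
print(round(w, 2), round(b, 2))  # learned parameters, close to 3 and 1
```

The human supplies the labels (the ys); the learning algorithm only ever sees example pairs, never the formula, which is the essence of the supervised setting.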
Natural language processing
Natural language processing (NLP) allows programs to read, write and communicate in
human languages such as English. Specific problems include speech recognition, speech
synthesis, machine translation, information extraction, information retrieval and question
answering.[51] Early work, based on Noam Chomsky's generative grammar and semantic
networks, had difficulty with word-sense disambiguation[f] unless restricted to small
domains called "micro-worlds" (due to the common sense knowledge problem). Margaret
Masterman believed that it was meaning, and not grammar, that was the key to
understanding languages, and that thesauri, not dictionaries, should be the basis of
computational language structure. Modern deep learning techniques for NLP include word
embedding (representing words, typically as vectors encoding their meaning),
transformers (a deep learning architecture using an attention mechanism), and others. In
2019, generative pre-trained transformer (or "GPT") language models began to generate
coherent text, and by 2023 these models were able to achieve human-level scores on the
bar exam, the SAT, the GRE, and many other real-world tests.

NLP is a subfield of Artificial Intelligence (AI) and a widely used technology for personal
assistants in various business fields. It takes the speech provided by the user, breaks it
down for proper understanding, and processes it accordingly. This is a recent and
effective approach, which is why it is in high demand in today's market. NLP is an
emerging field in which many transitions, such as compatibility with smart devices and
interactive conversation with humans, have already been made possible. Knowledge
representation, logical reasoning, and constraint satisfaction were the emphasis of early
AI applications in NLP; these were first applied to semantics and later to grammar. In the
last decade, a significant change in NLP research has resulted in the widespread use of
statistical approaches such as machine learning and data mining on a massive scale. The
need for automation is never-ending given the amount of work required these days, and
NLP is a very favorable approach for automated applications.
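The "words as vectors encoding their meaning" idea behind word embeddings can be sketched in a few lines of plain Python. The tiny corpus and window size below are illustrative assumptions, not a real embedding model: each word is represented by counts of its neighbouring words, and words used in similar contexts end up with similar vectors.

```python
# Minimal sketch of word vectors: represent each word by its co-occurrence
# counts with neighbouring words, then compare words by cosine similarity.
# The corpus and window size are made-up illustrations, not a trained model.
from collections import defaultdict
from math import sqrt

def cooccurrence_vectors(tokens, window=2):
    """Map each word to counts of the words appearing within the window."""
    vecs = defaultdict(lambda: defaultdict(int))
    for i, word in enumerate(tokens):
        for j in range(max(0, i - window), min(len(tokens), i + window + 1)):
            if i != j:
                vecs[word][tokens[j]] += 1
    return vecs

def cosine(u, v):
    """Cosine similarity between two sparse count vectors."""
    dot = sum(u.get(k, 0) * v.get(k, 0) for k in set(u) | set(v))
    nu = sqrt(sum(x * x for x in u.values()))
    nv = sqrt(sum(x * x for x in v.values()))
    return dot / (nu * nv)

corpus = "the cat sat on the mat the dog sat on the rug".split()
vecs = cooccurrence_vectors(corpus)
# "cat" and "dog" appear in similar contexts, so their vectors are similar.
print(round(cosine(vecs["cat"], vecs["dog"]), 2))
```

Real embeddings (and the transformer models built on them) learn dense vectors from billions of words, but the underlying intuition is the same: meaning is inferred from context.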
Deep learning
Deep learning[108] uses several layers of neurons between the network's inputs and outputs.
The multiple layers can progressively extract higher-level features from the raw input. For
example, in image processing, lower layers may identify edges, while higher layers may
identify the concepts relevant to a human such as digits or letters or faces. [110]
Deep learning has profoundly improved the performance of programs in many important
subfields of artificial intelligence, including computer vision, speech recognition, natural
language processing, image classification[111] and others. The reason that deep learning
performs so well in so many applications is not known as of 2023. [112] The sudden success
of deep learning in 2012–2015 did not occur because of some new discovery or theoretical
breakthrough (deep neural networks and backpropagation had been described by many
people, as far back as the 1950s)[i] but because of two factors: the incredible increase in
computer power (including the hundred-fold increase in speed by switching to GPUs) and
the availability of vast amounts of training data, especially the giant curated datasets used
for benchmark testing, such as ImageNet.[j] By strict definition, a deep neural network, or
DNN, is a neural network with three or more layers. In practice, most DNNs have many
more layers. DNNs are trained on large amounts of data to identify and classify
phenomena, recognize patterns and relationships, evaluate possibilities, and make
predictions and decisions. While a single-layer neural network can make useful,
approximate predictions and decisions, the additional layers in a deep neural network help
refine and optimize those outcomes for greater accuracy. Deep learning drives many
applications and services that improve automation, performing analytical and physical
tasks without human intervention. It lies behind everyday products and services, such as
digital assistants, voice-enabled TV remotes and credit card fraud detection, as well as
still-emerging technologies such as self-driving cars and generative AI.
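The idea that extra layers let a network compute things a single layer cannot can be shown with a hand-built sketch. The weights below are chosen by hand for clarity (an assumption for illustration; real deep networks learn them from data): the hidden layer extracts two intermediate features, OR and NAND, and the output layer combines them into XOR, a function no single layer of this form can compute.

```python
# Hand-built illustration of depth: a two-layer network whose hidden layer
# extracts intermediate features (OR, NAND) and whose output layer combines
# them into XOR. Weights are hand-picked for clarity; real nets learn them.

def step(x):
    """Threshold activation: fire (1) if the weighted sum is non-negative."""
    return 1 if x >= 0 else 0

def neuron(inputs, weights, bias):
    return step(sum(w * x for w, x in zip(weights, inputs)) + bias)

def xor_net(a, b):
    h_or = neuron([a, b], [1, 1], -0.5)      # hidden feature: a OR b
    h_nand = neuron([a, b], [-1, -1], 1.5)   # hidden feature: NOT (a AND b)
    return neuron([h_or, h_nand], [1, 1], -1.5)  # output: OR AND NAND = XOR

for a in (0, 1):
    for b in (0, 1):
        print(a, b, "->", xor_net(a, b))
```

This mirrors the point in the text: lower layers detect simple features, and higher layers combine them into concepts the input alone does not expose.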
ChatGPT
Generative pre-trained transformers (GPT) are large language models that are based on the
semantic relationships between words in sentences (natural language processing). Text-
based GPT models are pre-trained on a large corpus of text, which can be drawn from the
internet. The pre-training consists of predicting the next token (a token being usually a
word, subword, or punctuation mark). Throughout this pre-training, GPT models
accumulate knowledge about the world, and can then generate human-like text by
repeatedly predicting the next token. Typically, a subsequent training phase makes the
model more truthful, useful and harmless, usually with a technique called reinforcement
learning from human feedback (RLHF). Current GPT models are still prone to generating
falsehoods called "hallucinations", although this can be reduced with RLHF and quality
data. They are used in chatbots, which allow you to ask a question or request a task in
simple text. Current models and services include Gemini (formerly Bard), ChatGPT,
Grok, Claude, Copilot and LLaMA. Multimodal GPT models can process different types
of data (modalities) such as images, videos, sound and text.
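Next-token prediction, the objective GPT models are pre-trained on, can be caricatured with a bigram model in plain Python. This is a deliberately crude stand-in (counting which token follows which in a tiny made-up corpus), assumed here only to make the mechanism concrete; real GPT models use transformers trained on enormous corpora.

```python
# Toy sketch of next-token prediction: a bigram model that counts which
# token follows which in a tiny made-up corpus, then generates text by
# repeatedly predicting the next token. Not a real language model.
from collections import Counter, defaultdict

corpus = "the model predicts the next token and the next token again".split()

# Count, for each token, the tokens observed to follow it.
follows = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev][nxt] += 1

def predict_next(token):
    """Return the most frequent continuation seen in training."""
    return follows[token].most_common(1)[0][0]

# Generate text by feeding each prediction back in as the new context.
tok = "the"
out = [tok]
for _ in range(3):
    tok = predict_next(tok)
    out.append(tok)
print(" ".join(out))
```

The gap between this sketch and a real GPT is the model of context: a bigram looks at one previous token, while a transformer attends over thousands, which is what makes coherent long-form text possible.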
Examples
AI-Powered Medical Imaging Analysis
Background: XYZ Hospital, a leading healthcare institution, is facing challenges in
efficiently analyzing medical images such as X-rays, MRIs, and CT scans. The manual
interpretation of these images by radiologists is time-consuming and can lead to delays in
diagnosis and treatment. To address this issue, the hospital decides to implement an AI-
powered medical imaging analysis system.
Objective: The objective of this case study is to showcase how AI technology can
enhance the efficiency and accuracy of medical image analysis, leading to improved
patient outcomes and operational efficiency for healthcare providers.
Implementation:
Algorithm Development:
Data scientists and medical experts collaborate to develop machine learning algorithms
capable of detecting and classifying abnormalities in medical images.
Deep learning techniques, such as convolutional neural networks (CNNs), are utilized
due to their effectiveness in image recognition tasks.
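The building block that makes CNNs effective for image recognition is the convolution: sliding a small filter over the image to produce a feature map. The sketch below is an illustrative assumption (a made-up 4x4 "image" and a hand-picked vertical-edge filter), not the hospital's system, but it shows the operation a CNN stacks and learns.

```python
# Illustrative sketch of 2D convolution, the core operation in CNNs.
# The tiny "image" and the vertical-edge filter are made-up examples.

def convolve2d(image, kernel):
    """Slide the kernel over the image (no padding) and sum the products."""
    kh, kw = len(kernel), len(kernel[0])
    out = []
    for i in range(len(image) - kh + 1):
        row = []
        for j in range(len(image[0]) - kw + 1):
            row.append(sum(image[i + a][j + b] * kernel[a][b]
                           for a in range(kh) for b in range(kw)))
        out.append(row)
    return out

# 4x4 "image": dark left half (0), bright right half (1).
image = [[0, 0, 1, 1]] * 4
# Vertical-edge filter: responds where brightness changes left to right.
kernel = [[-1, 1]] * 2

feature_map = convolve2d(image, kernel)
print(feature_map)  # strongest response along the dark/bright boundary
```

In a real medical-imaging CNN the filters are not hand-picked: they are learned from labelled scans, with early layers finding edges like this one and deeper layers combining them into diagnostically relevant shapes.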
Improved Efficiency:
The AI system significantly reduces the time required for image analysis, allowing
radiologists to prioritize complex cases and make quicker treatment decisions.
Turnaround times for reporting are reduced, leading to faster diagnosis and treatment
initiation for patients.
Enhanced Accuracy:
AI algorithms demonstrate high accuracy in detecting abnormalities, including subtle
signs that may be overlooked by human observers.
The combination of AI-driven insights and radiologist expertise leads to more
comprehensive and accurate diagnoses.
Cost Savings:
By streamlining the imaging analysis process, the hospital realizes cost savings through
increased efficiency and reduced reliance on manual labor.
Challenges and Considerations:
Regulatory Compliance:
Compliance with healthcare regulations, such as HIPAA in the United States, requires
careful handling of patient data to ensure privacy and security.
Ethical and Legal Implications:
The use of AI in medical decision-making raises ethical considerations regarding
accountability, transparency, and patient consent.
Clear guidelines and protocols must be established to address these concerns and ensure
responsible AI deployment.
References
✓ https://www.geeksforgeeks.org/
✓ https://www.tutorialspoint.com/
✓ https://en.wikipedia.org/
✓ https://www.slideshare.net/
✓ https://www.copilot.com/
- AI is a branch of computer science that deals with the creation of intelligent agents,
which are systems that can reason, learn, and act autonomously. AI research has been
highly successful in developing effective techniques for solving a wide range of
problems, from game playing to medical diagnosis.
History of AI
The concept of intelligent machines has been around for centuries, but the field of AI as we
know it today emerged in the mid-20th century. Alan Turing's 1950 paper, "Computing
Machinery and Intelligence," introduced the Turing test, a proposed test of a machine's ability
to exhibit intelligent behavior equivalent to, or indistinguishable from, that of a human.
In the following decades, AI research experienced periods of both progress and stagnation.
The early enthusiasm for AI was dampened by the limitations of computing power and the
difficulty of developing algorithms that could effectively learn and reason. However, in recent
years, advances in machine learning and deep learning have led to a resurgence of interest in
AI.
Core Concepts of AI
There are several key concepts that underpin the field of AI:
• Machine Learning: Machine learning algorithms allow computers to learn from data
without being explicitly programmed. This enables them to identify patterns and make
predictions based on new data.
• Deep Learning: Deep learning is a subfield of machine learning that uses artificial
neural networks, which are inspired by the structure and function of the human brain.
Deep learning algorithms have been particularly successful in tasks such as image
recognition and natural language processing.
• Natural Language Processing (NLP): NLP is a field of computer science that deals
with the interaction between computers and human language. NLP algorithms enable
computers to understand and generate human language, which is essential for tasks
such as machine translation and chatbots.
• Computer Vision: Computer vision is a field of computer science that deals with the
extraction of information from images and videos. Computer vision algorithms enable
computers to "see" and understand the world around them, which is essential for tasks
such as self-driving cars and facial recognition.
• Robotics: Robotics is a field of engineering that deals with the design, construction,
operation, and application of robots. Robots are machines that can sense their
environment and take actions in the world. AI plays an increasingly important role in
robotics, as robots become more sophisticated and capable of autonomous behavior.
Applications of AI
AI is already being used in a wide range of applications, from medical imaging analysis to
algorithmic trading, and its impact is only going to grow in the coming years.
Benefits of AI
AI has the potential to bring about a wide range of benefits for society. Some of the potential
benefits of AI include:
• Increased productivity and efficiency: AI can automate tasks that are currently
performed by humans, freeing up human time and resources for other activities.
• Improved decision-making: AI can analyze large amounts of data to identify patterns
and trends that humans might miss. This can lead to better decision-making in a variety
of areas.
• Enhanced innovation: AI can be used to develop new products and services that
would not be possible without it.
• Improved quality of life: AI can be used to address some of the world's most pressing
challenges, such as poverty, disease, and climate change.
Challenges of AI
While AI has the potential to bring about many benefits, there are also challenges that
need to be addressed, including algorithmic bias, data privacy, job displacement, and the
need for greater interpretability and robustness in AI systems.
Conclusion
In conclusion, the case study exemplifies the profound impact of artificial intelligence (AI)
across industries, showcasing its transformative potential in healthcare. Through the
implementation of an AI-powered medical imaging analysis system, XYZ Hospital has
achieved remarkable improvements in efficiency, accuracy, and cost-effectiveness.
Thank You….