
MAHARASHTRA STATE BOARD OF TECHNICAL EDUCATION, MUMBAI

GOVERNMENT POLYTECHNIC, BEED
[Institute Code: 0032]

MICROPROJECT

Course & Code: ETI (22618)


Title of Microproject: Prepare a report on Artificial Intelligence

Subject Teacher Head of Department Principal


Mr. Waghmare Waghmare A.K. Dr. M.R. Lohokare

Seal of
institute
MAHARASHTRA STATE BOARD OF TECHNICAL EDUCATION, MUMBAI

CERTIFICATE OF MICROPROJECT

This is to certify that the following students of CM6I (Division A/B) of the Diploma in


COMPUTER ENGINEERING of the institute GOVERNMENT POLYTECHNIC, BEED,
Institute Code: 0032, have satisfactorily completed MICROPROJECT work in ETI
(22618) for the academic year 2023-24 as prescribed in the curriculum.

Roll No.  Exam Seat No.  Name of Student     Title of Microproject
347       385000         Arman Amin Shaikh   Prepare a report on Artificial Intelligence

Place: Beed    Date: / /2024

Subject Teacher Head of Department Principal


Mr. Waghmare Waghmare A.K. Dr. M.R. Lohokare

Seal of
Institute
Teacher Evaluation Sheet

Name of Student: Arman Amin Shaikh    Enrollment No.: 2100320115


Programme: Computer Technology    Semester: VI (CM6I)
Course Title & Code: ETI (22618)    Roll No.: 347
Title of the Micro-Project: Prepare a report on Artificial Intelligence
Evaluation as per Suggested Rubric for Assessment of Micro Project
Evaluation as per Suggested Rubric for Assessment of Micro Project

Sr. No.  Characteristic to be assessed    Poor          Average       Good          Excellent
                                          (Marks 1-3)   (Marks 4-5)   (Marks 6-8)   (Marks 9-10)
[A] Process and Product Assessment (Convert total marks out of 06)
1  Relevance to the course
2  Literature Review / information collection
3  Completion of the Target as per project proposal
4  Analysis and data representation
5  Quality of Prototype/Model
6  Report Preparation
   Total Marks Out of (6)
[B] Individual Presentation/Viva (Convert total marks out of 04)
1 Presentation
2 Viva

Total Marks Out of (4)


Micro-Project Evaluation Sheet

Process and Product Assessment (6 marks)    Individual Presentation/Viva (4 marks)    Total Marks: 10
(Note: the total marks obtained from the above rubric are to be converted in proportion to '6' marks)

Name and designation of the Teacher: Mr. Waghmare (Lecturer in CM)

Dated Signature………………………………………………………………………

Annexure – II

Part – B Micro-Project Report


(Outcomes after Execution) Format for Micro-Project Report (Minimum 6 pages)

Title of Micro-Project: Prepare a report on Artificial Intelligence


1.0 Rationale
1. Significance in Modern Society: AI has become pervasive in various aspects of modern
life, from personalized recommendations on streaming platforms to complex decision-
making in healthcare and finance. Understanding AI is essential for individuals,
businesses, and policymakers to navigate its implications effectively.
2. Economic Impact: AI is reshaping industries, creating new job opportunities, and
driving economic growth. An understanding of AI's potential economic impact is crucial
for businesses and policymakers to harness its benefits and mitigate potential
challenges, such as job displacement.
3. Technological Advancement: The rapid advancement of AI technologies, such as
machine learning and deep learning, has led to groundbreaking innovations with far-
reaching implications. Exploring the underlying technologies driving AI enables a
deeper understanding of its capabilities and limitations.
4. Ethical and Societal Implications: AI raises profound ethical and societal questions,
including concerns about bias in algorithms, data privacy, and the impact on human
autonomy and labor markets. Delving into these ethical considerations is essential for
fostering responsible AI development and deployment.

2.0 Aims/Benefits of the Micro-Project


Focused Learning Objectives:
Aim: To deepen understanding of key concepts and principles related to AI.
Benefit: By focusing on specific aspects of AI, participants can develop a solid
foundation in relevant topics, enabling them to grasp fundamental concepts more
effectively.
Hands-on Experience:
Aim: To provide practical experience through hands-on activities or projects related
to AI.
Benefit: Engaging in practical tasks allows participants to apply theoretical
knowledge to real-world scenarios, enhancing comprehension and skill development
in AI.
Skill Enhancement:
Aim: To improve participants' technical skills in AI-related tools, programming
languages, or platforms.
Benefit: By actively working on AI projects, participants can enhance their
proficiency in using AI tools and technologies, thereby increasing their marketability
and employability in AI-related fields.
Problem-Solving Skills:
Aim: To develop participants' problem-solving abilities by tackling challenges within
the context of AI projects.
Benefit: Working on AI micro-projects requires participants to identify, analyze, and
solve problems, fostering critical thinking and problem-solving skills essential for
success in AI-related domains.

3.0 Course Outcomes Achieved


1. Understanding of AI Concepts:
• Outcome: Participants demonstrate a clear understanding of fundamental
concepts and principles related to AI, including machine learning, deep learning,
neural networks, and natural language processing.
2. Application of AI Techniques:
• Outcome: Participants can apply AI techniques and algorithms to solve real-
world problems and tasks, demonstrating proficiency in implementing AI
solutions using relevant tools, frameworks, and programming languages.
3. Critical Thinking and Problem-Solving Skills:
• Outcome: Participants exhibit strong critical thinking skills by identifying,
analyzing, and solving complex problems within the context of AI projects. They
can evaluate different approaches and strategies, selecting the most appropriate
solutions based on the problem requirements.
4. Technical Proficiency in AI Tools and Technologies:
• Outcome: Participants demonstrate technical proficiency in using AI tools,
platforms, and programming languages commonly employed in AI development
and research. They can effectively utilize tools such as TensorFlow, PyTorch,
scikit-learn, and others to implement AI models and algorithms.
5. Collaboration and Teamwork Skills:
• Outcome: Participants exhibit effective collaboration and teamwork skills by
working collaboratively on AI projects with peers. They can communicate ideas,
share resources, and coordinate efforts to achieve common goals, demonstrating
the ability to work effectively in multidisciplinary teams.

4.0 Literature Review
The literature on Artificial Intelligence (AI) encompasses its historical trajectory,
technological advancements, diverse applications, ethical considerations, and future
trajectories. Originating from foundational work in the mid-20th century and formalized
during the Dartmouth Conference in 1956, AI has seen a resurgence driven by
breakthroughs in machine learning, particularly deep learning. This has facilitated its
integration across various industries, including healthcare, finance, manufacturing, and
transportation, where it's employed for tasks ranging from medical imaging analysis to
algorithmic trading. However, alongside its proliferation, ethical concerns have arisen,
such as algorithmic bias and privacy violations, necessitating the development of
frameworks and regulations to ensure responsible AI deployment. Looking forward,
trends such as explainable AI and federated learning offer promise, but challenges
persist, including the need for greater interpretability and robustness in AI systems.
Ultimately, interdisciplinary collaboration, ethical governance, and continued innovation
will be crucial in harnessing AI's potential while mitigating its risks for the benefit of
society.

5.0 Actual Methodology Followed

▪ First, we selected the topic of the micro-project.
▪ Then we collected the information required to complete the project.
▪ Finally, we prepared the report on the project using a laptop/computer.

Microproject

On

Prepare a report on Artificial Intelligence

Submitted in partial fulfillment of the requirements

of the

Diploma in Computer Technology

ARMAN SHAIKH

Aim
The aim of a case study about artificial intelligence (AI) can vary based on the specific
context, industry, and objectives. Here are several potential aims for a case study on AI:

Demonstrating ROI: Showcase how implementing AI technologies has led to measurable
returns on investment for a particular business or industry. This could include cost
savings, revenue generation, or efficiency improvements.

Highlighting Use Cases: Illustrate various real-world applications of AI across different
sectors such as healthcare, finance, retail, manufacturing, or transportation. Show how AI
is being used to solve specific problems or improve processes.

Addressing Challenges: Investigate challenges encountered during AI implementation,
such as data quality issues, ethical concerns, regulatory compliance, or resistance from
employees. Analyze how these challenges were overcome or mitigated.

Comparing Approaches: Compare different AI technologies, algorithms, or
implementation strategies to determine which ones are most effective in achieving
specific goals. This could involve comparing supervised vs. unsupervised learning,
traditional machine learning vs. deep learning, or different AI platforms.

Exploring Ethical Implications: Examine the ethical considerations associated with AI
adoption, such as bias in algorithms, privacy concerns, job displacement, or the impact on
marginalized communities. Discuss how organizations are addressing these ethical
challenges.

Assessing Future Trends: Predict future trends in AI adoption and innovation based on
current case studies and industry developments. Discuss potential opportunities and
challenges that may arise as AI continues to evolve.

Educational Purposes: Provide a learning resource for students or professionals
interested in understanding how AI is applied in practice. Break down complex concepts
into easily understandable examples and explanations.

Introduction
According to some, artificial intelligence is the most promising development for the future.
From curing cancer to resolving the global hunger crisis, artificial intelligence is being
presented as the solution to all of our problems. Others, however, regard it as a threat:
artificial intelligence may give rise to unemployment and inequality, and could even
jeopardize the continued existence of humankind. As the technology entrepreneur Elon
Musk put it: “The benign scenario is that artificial intelligence can do any job that humans
do – but better.”

Deloitte has positioned itself on the optimistic side of that spectrum. “We believe that
artificial intelligence will be extremely helpful to us and to our clients”, says Richard
Roovers, a partner at Deloitte Netherlands and Innovation Lead Transformational
Solutions North-West Europe. Artificial intelligence will enable us to solve problems that
humans are unable, or hardly able, to solve, explains Richard. “Artificial intelligence is
capable of processing massive quantities of data and has the ability to discover patterns
that even the smartest mathematicians are unable to find. That in itself opens up a large
number of new possibilities.” Those new possibilities are what the case studies in this
report are about: they provide an overview of the ways in which Deloitte is working to
develop applications incorporating artificial intelligence, both internally and for use with
clients. The applications are diverse, make use of different technologies and can be found
in a wide range of industries. This shows that, aside from all of the predictions for the
future, artificial intelligence has already been a reality in the business sector for some
time and forms a resource that could possibly provide your company with a decisive lead.

The concept of AI dates back several decades, but recent breakthroughs in computational
power, big data, and algorithmic sophistication have propelled its rapid evolution. Today,
AI applications permeate nearly every aspect of our lives, from virtual assistants on our
smartphones to predictive algorithms powering financial markets, from personalized
recommendations on streaming platforms to autonomous vehicles navigating city streets.
One of the key drivers behind the proliferation of AI is its ability to analyze vast amounts
of data with remarkable speed and accuracy, uncovering insights and patterns that would
be virtually impossible for humans to discern. This data-driven approach has unlocked
new opportunities for businesses to enhance decision-making, optimize processes, and
create personalized experiences for customers.

However, the widespread adoption of AI also raises important ethical, societal, and
economic considerations. Concerns about data privacy, algorithmic bias, job
displacement, and the concentration of power in the hands of a few tech giants have
sparked debates about the responsible development and deployment of AI technologies.
In this age of rapid technological change, understanding the capabilities, limitations, and
implications of AI is more important than ever. This introduction sets the stage for
exploring the multifaceted landscape of artificial intelligence, delving into its
applications, challenges, opportunities, and the profound impact it continues to have on
our world.
Machine Learning
Machine learning is the study of programs that can improve their performance on a given
task automatically.[41] It has been a part of AI from the beginning.[e]

There are several kinds of machine learning. Unsupervised learning analyzes a stream of
data, finds patterns, and makes predictions without any other guidance.[44] Supervised
learning requires a human to label the input data first, and comes in two main varieties:
classification (where the program must learn to predict what category the input belongs
in) and regression (where the program must deduce a numeric function based on numeric
input).[45] In reinforcement learning the agent is rewarded for good responses and
punished for bad ones; the agent learns to choose responses that are classified as
"good".[46] Transfer learning is when the knowledge gained from one problem is applied
to a new problem.[47] Deep learning is a type of machine learning that runs inputs
through biologically inspired artificial neural networks for all of these types of
learning.[48] Computational learning theory can assess learners by computational
complexity, by sample complexity (how much data is required), or by other notions of
optimization.
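The supervised-learning variety described above can be made concrete with a tiny sketch. The nearest-centroid classifier and the labelled points below are invented purely for illustration; real systems use libraries such as scikit-learn and far larger datasets.

```python
# A minimal supervised-learning sketch: nearest-centroid classification.
# The tiny labelled dataset below is invented for illustration only.

def train(samples):
    """Compute one centroid (mean point) per label from (point, label) pairs."""
    sums, counts = {}, {}
    for (x, y), label in samples:
        sx, sy = sums.get(label, (0.0, 0.0))
        sums[label] = (sx + x, sy + y)
        counts[label] = counts.get(label, 0) + 1
    return {label: (sx / counts[label], sy / counts[label])
            for label, (sx, sy) in sums.items()}

def predict(centroids, point):
    """Classify a point by its nearest centroid (squared Euclidean distance)."""
    px, py = point
    return min(centroids,
               key=lambda lab: (px - centroids[lab][0]) ** 2 +
                               (py - centroids[lab][1]) ** 2)

# "Human-labelled" training data: two well-separated clusters.
labelled = [((1, 1), "cat"), ((2, 1), "cat"), ((8, 9), "dog"), ((9, 8), "dog")]
model = train(labelled)
print(predict(model, (1.5, 1.2)))  # a point near the first cluster -> "cat"
```

The same data without labels would be an unsupervised problem (clustering), and replacing the fixed labels with a reward signal would turn it into reinforcement learning.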

Natural language processing
Natural language processing (NLP) allows programs to read, write and communicate in
human languages such as English. Specific problems include speech recognition, speech
synthesis, machine translation, information extraction, information retrieval and question
answering.[51] Early work, based on Noam Chomsky's generative grammar and semantic
networks, had difficulty with word-sense disambiguation[f] unless restricted to small
domains called "micro-worlds" (due to the common-sense knowledge problem). Margaret
Masterman believed that it was meaning, and not grammar, that was the key to
understanding languages, and that thesauri, not dictionaries, should be the basis of
computational language structure. Modern deep learning techniques for NLP include
word embedding (representing words, typically as vectors encoding their meaning),
transformers (a deep learning architecture using an attention mechanism), and others. In
2019, generative pre-trained transformer (or "GPT") language models began to generate
coherent text, and by 2023 these models were able to achieve human-level scores on the
bar exam, the SAT, the GRE, and many other real-world tests.

NLP is a subfield of Artificial Intelligence (AI) and is widely used for personal assistants
in various business fields. The technology takes the speech provided by the user, breaks it
down for proper understanding, and processes it accordingly. It is a recent and effective
approach, which is why it is in such high demand in today's market. NLP is an
up-and-coming field in which many transitions, such as compatibility with smart devices
and interactive conversation with a human, have already been made possible. Knowledge
representation, logical reasoning, and constraint satisfaction were the emphasis of early
AI applications in NLP, applied first to semantics and later to grammar. In the last decade,
a significant change in NLP research has resulted in the widespread use of statistical
approaches such as machine learning and data mining on a massive scale. The need for
automation is never-ending, given the amount of work required these days, and NLP is a
very favorable aspect when it comes to automated applications.
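The word-embedding idea mentioned above, words as vectors whose geometry encodes meaning, can be sketched with hand-made 3-dimensional vectors and cosine similarity. The vectors here are invented for illustration; real embeddings (e.g. from word2vec or a transformer) have hundreds of dimensions and are learned from text.

```python
# Toy word-embedding sketch: words as vectors, relatedness as cosine similarity.
import math

# Hand-made vectors, invented for illustration only.
embeddings = {
    "king":  [0.9, 0.8, 0.1],
    "queen": [0.9, 0.7, 0.2],
    "apple": [0.1, 0.2, 0.9],
}

def cosine(u, v):
    """Cosine of the angle between two vectors: 1.0 means same direction."""
    dot = sum(a * b for a, b in zip(u, v))
    norm_u = math.sqrt(sum(a * a for a in u))
    norm_v = math.sqrt(sum(b * b for b in v))
    return dot / (norm_u * norm_v)

print(cosine(embeddings["king"], embeddings["queen"]))  # high: related words
print(cosine(embeddings["king"], embeddings["apple"]))  # low: unrelated words
```

A transformer's attention mechanism builds on the same vector representation, weighting each word's vector by its relevance to the others in the sentence.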

Deep learning
Deep learning[108] uses several layers of neurons between the network's inputs and outputs.
The multiple layers can progressively extract higher-level features from the raw input. For
example, in image processing, lower layers may identify edges, while higher layers may
identify the concepts relevant to a human such as digits or letters or faces. [110]
Deep learning has profoundly improved the performance of programs in many important
subfields of artificial intelligence, including computer vision, speech recognition, natural
language processing, image classification[111] and others. The reason that deep learning
performs so well in so many applications is not known as of 2023. [112] The sudden success
of deep learning in 2012–2015 did not occur because of some new discovery or theoretical
breakthrough (deep neural networks and backpropagation had been described by many
people, as far back as the 1950s)[i] but because of two factors: the incredible increase in
computer power (including the hundred-fold increase in speed by switching to GPUs) and
the availability of vast amounts of training data, especially the giant curated datasets used
for benchmark testing, such as ImageNet.[j]

By strict definition, a deep neural network, or DNN, is a neural network with three or
more layers. In practice, most DNNs have many more layers. DNNs are trained on large
amounts of data to identify and classify phenomena, recognize patterns and relationships,
evaluate possibilities, and make predictions and decisions. While a single-layer neural
network can make useful, approximate predictions and decisions, the additional layers in
a deep neural network help refine and optimize those outcomes for greater accuracy.
Deep learning drives many applications and services that improve automation,
performing analytical and physical tasks without human intervention. It lies behind
everyday products and services (e.g., digital assistants, voice-enabled TV remotes, credit
card fraud detection) as well as still-emerging technologies such as self-driving cars and
generative AI.
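The layered structure described above can be sketched as a forward pass through two fully connected layers. The weights here are fixed by hand purely for illustration; in a real network they would be learned from data via backpropagation.

```python
# Minimal deep-network sketch: a forward pass through two dense layers.
# Weights are hand-picked for illustration; real networks learn them.

def relu(v):
    """Rectified linear activation: negative values become zero."""
    return [max(0.0, x) for x in v]

def dense(inputs, weights, biases):
    """One fully connected layer: out_j = sum_i inputs[i] * weights[i][j] + biases[j]."""
    return [sum(x * w[j] for x, w in zip(inputs, weights)) + b
            for j, b in enumerate(biases)]

x = [1.0, 2.0]                                                  # raw input
h1 = relu(dense(x, [[0.5, -1.0], [0.25, 0.75]], [0.0, 0.0]))    # layer 1: low-level features
h2 = relu(dense(h1, [[1.0, 0.0], [0.0, 1.0]], [0.1, 0.1]))      # layer 2: higher-level features
print(h2)  # the deeper layer's representation of the input
```

Stacking more such layers (and, for images, replacing `dense` with convolutions) is what lets deeper layers represent edges, then shapes, then whole objects.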

Chat GPT

Generative pre-trained transformers (GPT) are large language models based on the
semantic relationships between words in sentences (natural language processing). Text-
based GPT models are pre-trained on a large corpus of text, which can be taken from the
internet. The pre-training consists of predicting the next token (a token usually being a
word, subword, or punctuation mark). Throughout this pre-training, GPT models
accumulate knowledge about the world and can then generate human-like text by
repeatedly predicting the next token. Typically, a subsequent training phase makes the
model more truthful, useful and harmless, usually with a technique called reinforcement
learning from human feedback (RLHF). Current GPT models are still prone to generating
falsehoods called "hallucinations", although this can be reduced with RLHF and quality
data. They are used in chatbots, which allow you to ask a question or request a task in
simple text. Current models and services include Gemini (formerly Bard), ChatGPT,
Grok, Claude, Copilot and LLaMA. Multimodal GPT models can process different types
of data (modalities) such as images, videos, sound and text.
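The pre-training objective described above, predicting the next token, can be shown in miniature with a bigram counter standing in for the transformer. The "corpus" is invented for illustration, and counting word pairs is of course vastly simpler than what a GPT model does, but the generate-by-repeated-prediction loop is the same idea.

```python
# Next-token prediction in miniature: a bigram model that, like GPT
# pre-training, learns from text to predict the next token -- here with
# simple counts rather than a neural network. Corpus invented for illustration.
from collections import Counter, defaultdict

corpus = "the cat sat on the mat the cat ate the fish".split()

# "Pre-training": count which token follows which.
follows = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev][nxt] += 1

def predict_next(token):
    """Return the continuation seen most often in training."""
    return follows[token].most_common(1)[0][0]

# "Generation": repeatedly append the predicted next token.
text = ["the"]
for _ in range(3):
    text.append(predict_next(text[-1]))
print(" ".join(text))
```

A real GPT model replaces the count table with a transformer that outputs a probability over every token in its vocabulary, conditioned on the whole preceding context rather than just the previous word.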

Example:
AI-Powered Medical Imaging Analysis
Background: XYZ Hospital, a leading healthcare institution, is facing challenges in
efficiently analyzing medical images such as X-rays, MRIs, and CT scans. The manual
interpretation of these images by radiologists is time-consuming and can lead to delays in
diagnosis and treatment. To address this issue, the hospital decides to implement an AI-
powered medical imaging analysis system.

Objective: The objective of this case study is to showcase how AI technology can
enhance the efficiency and accuracy of medical image analysis, leading to improved
patient outcomes and operational efficiency for healthcare providers.
Implementation:

Data Collection and Preprocessing:


XYZ Hospital gathers a large dataset of anonymized medical images along with
corresponding diagnoses from its electronic health records (EHR) system.
The images are preprocessed to enhance clarity and remove artifacts, ensuring optimal
input for the AI algorithms.

Algorithm Development:
Data scientists and medical experts collaborate to develop machine learning algorithms
capable of detecting and classifying abnormalities in medical images.
Deep learning techniques, such as convolutional neural networks (CNNs), are utilized
due to their effectiveness in image recognition tasks.

Training and Validation:


The AI models are trained using the annotated dataset, with emphasis on diverse cases
and rare conditions to ensure robustness.
Cross-validation techniques are employed to evaluate model performance and fine-tune
parameters for optimal results.
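The cross-validation step above can be sketched as follows. The dataset size and the `evaluate` placeholder are hypothetical stand-ins for the hospital's images and CNN; in practice a library such as scikit-learn's `KFold` would do the splitting.

```python
# Cross-validation sketch: split the data into k folds, hold one fold out
# for validation each round, and average the scores across rounds.

def k_fold_indices(n_samples, k):
    """Yield (train_indices, validation_indices) for each of k folds."""
    fold_size = n_samples // k
    indices = list(range(n_samples))
    for i in range(k):
        val = indices[i * fold_size:(i + 1) * fold_size]
        train = indices[:i * fold_size] + indices[(i + 1) * fold_size:]
        yield train, val

def evaluate(train_idx, val_idx):
    # Placeholder: a real pipeline would train the CNN on the images at
    # train_idx and return its accuracy on the images at val_idx.
    return len(val_idx) / (len(train_idx) + len(val_idx))

scores = [evaluate(tr, va) for tr, va in k_fold_indices(100, 5)]
print(sum(scores) / len(scores))  # mean validation score across the 5 folds
```

Averaging over folds gives a more reliable performance estimate than a single train/validation split, which matters when rare conditions appear only a few times in the dataset.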

Integration into Clinical Workflow:


The trained AI models are integrated into the hospital's Picture Archiving and
Communication System (PACS), allowing seamless integration with existing
infrastructure.
Radiologists are provided with a user-friendly interface that enables them to upload
images for analysis and receive AI-generated insights alongside their own interpretations.

Improved Efficiency:
The AI system significantly reduces the time required for image analysis, allowing
radiologists to prioritize complex cases and make quicker treatment decisions.
Turnaround times for reporting are reduced, leading to faster diagnosis and treatment
initiation for patients.

Enhanced Accuracy:
AI algorithms demonstrate high accuracy in detecting abnormalities, including subtle
signs that may be overlooked by human observers.
The combination of AI-driven insights and radiologist expertise leads to more
comprehensive and accurate diagnoses.

Cost Savings:
By streamlining the imaging analysis process, the hospital realizes cost savings through
increased efficiency and reduced reliance on manual labor.
Challenges and Considerations:

Data Quality and Bias:


Ensuring the representativeness and quality of the training data is essential to avoid bias
and ensure the generalizability of the AI models.
Ongoing monitoring and validation are necessary to identify and address any biases that
may emerge over time.

Regulatory Compliance:
Compliance with healthcare regulations, such as HIPAA in the United States, requires
careful handling of patient data to ensure privacy and security.
Ethical and Legal Implications:
The use of AI in medical decision-making raises ethical considerations regarding
accountability, transparency, and patient consent.
Clear guidelines and protocols must be established to address these concerns and ensure
responsible AI deployment.

References

✓ www.geeksforgeeks.org
✓ https://www.tutorialspoint.com/
✓ https://en.wikipedia.org/
✓ www.slideshare.net
✓ www.copilot.com

AI is a branch of computer science that deals with the creation of intelligent agents,
which are systems that can reason, learn, and act autonomously. AI research has been
highly successful in developing effective techniques for solving a wide range of
problems, from game playing to medical diagnosis.

History of AI

The concept of intelligent machines has been around for centuries, but the field of AI as we
know it today emerged in the mid-20th century. Alan Turing's 1950 paper, "Computing
Machinery and Intelligence," introduced the Turing test, a proposed test of a machine's ability
to exhibit intelligent behavior equivalent to, or indistinguishable from, that of a human.

In the following decades, AI research experienced periods of both progress and stagnation.
The early enthusiasm for AI was dampened by the limitations of computing power and the
difficulty of developing algorithms that could effectively learn and reason. However, in recent
years, advances in machine learning and deep learning have led to a resurgence of interest in
AI.

Core Concepts of AI

There are several key concepts that underpin the field of AI:

• Machine Learning: Machine learning algorithms allow computers to learn from data
without being explicitly programmed. This enables them to identify patterns and make
predictions based on new data.
• Deep Learning: Deep learning is a subfield of machine learning that uses artificial
neural networks, which are inspired by the structure and function of the human brain.
Deep learning algorithms have been particularly successful in tasks such as image
recognition and natural language processing.
• Natural Language Processing (NLP): NLP is a field of computer science that deals
with the interaction between computers and human language. NLP algorithms enable
computers to understand and generate human language, which is essential for tasks
such as machine translation and chatbots.
• Computer Vision: Computer vision is a field of computer science that deals with the
extraction of information from images and videos. Computer vision algorithms enable
computers to "see" and understand the world around them, which is essential for tasks
such as self-driving cars and facial recognition.
• Robotics: Robotics is a field of engineering that deals with the design, construction,
operation, and application of robots. Robots are machines that can sense their
environment and take actions in the world. AI plays an increasingly important role in
robotics, as robots become more sophisticated and capable of autonomous behavior.

Applications of AI

AI is already being used in a wide range of applications, and its impact is only going to grow in
the coming years. Here are some examples of how AI is being used today:

• Healthcare: AI is being used to develop new diagnostic tools, personalize treatment
plans, and even perform surgery.
• Finance: AI is being used to detect fraud, manage risk, and provide personalized
financial advice.
• Transportation: AI is being used to develop self-driving cars, optimize traffic flow, and
improve logistics.
• Manufacturing: AI is being used to automate tasks, improve quality control, and
optimize production processes.
• Customer Service: AI is being used to develop chatbots that can answer customer
questions and provide support.
• Entertainment: AI is being used to develop more realistic video games and create
personalized recommendations for movies and music.

Benefits of AI

AI has the potential to bring about a wide range of benefits for society. Some of the potential
benefits of AI include:

• Increased productivity and efficiency: AI can automate tasks that are currently
performed by humans, freeing up human time and resources for other activities.
• Improved decision-making: AI can analyze large amounts of data to identify patterns
and trends that humans might miss. This can lead to better decision-making in a variety
of areas.
• Enhanced innovation: AI can be used to develop new products and services that
would not be possible without it.
• Improved quality of life: AI can be used to address some of the world's most pressing
challenges, such as poverty, disease, and climate change.

Challenges of AI

While AI has the potential to bring about many benefits, there are also some challenges that
need to be addressed. Some of the challenges of AI include:

• Job displacement: As AI becomes more sophisticated, it is likely to automate many
jobs that are currently performed by humans. This could lead to widespread
unemployment and social unrest.
• Ethical considerations: AI systems can be biased, and there is a risk that they will
perpetuate or amplify existing societal biases in the decisions they make.
Conclusion

In conclusion, the case study exemplifies the profound impact of artificial intelligence (AI)
across industries, showcasing its transformative potential in healthcare. Through the
implementation of an AI-powered medical imaging analysis system, XYZ Hospital has
achieved remarkable improvements in efficiency, accuracy, and cost-effectiveness.

AI technologies, particularly machine learning algorithms like convolutional neural
networks (CNNs), have enabled healthcare providers to analyze medical images with
unprecedented speed and accuracy. This has led to faster diagnosis, treatment initiation,
and ultimately, improved patient outcomes.

However, the successful deployment of AI in healthcare also highlights important
considerations and challenges. Ensuring the quality and representativeness of training
data, complying with regulatory requirements such as HIPAA, and addressing ethical and
legal implications are critical aspects of responsible AI implementation.

As AI continues to advance and permeate various sectors, including healthcare, it is
imperative that organizations remain vigilant in addressing emerging challenges and
opportunities, leveraging AI technologies to drive innovation, efficiency, and positive
societal impact.

Thank You….

