Artificial Intelligence
Course work 2
Using JavaScript
Group members:
We express our deep appreciation to the members of our Artificial Intelligence group for their
exceptional input and collaboration on this project. Their dedication, competence, and teamwork
have greatly contributed to its successful completion.
We would also like to recognize each team member's individual contributions. Their
abilities, knowledge, and creative thinking have significantly improved our artificial intelligence
solution.
We would like to thank Mr. Ethayaraja, a visiting lecturer at the Esoft Metro Campus, for his
guidance on this assignment over the course of numerous conversations. We would like
to thank everyone who has contributed to the writing of this assignment, both directly and
indirectly. A big thank you as well goes out to Mr. Nuhman, our degree coordinator, who helped
us with our tasks.
Additionally, we would like to thank the institution. Their infrastructure, resources, and
confidence in our abilities have given us the groundwork we need to succeed.
Vendors have been rushing to highlight how AI is used in their goods and services as AI buzz
has grown. Frequently, what they classify as AI is just a part of the technology, like machine
learning. For the creation and training of machine learning algorithms, AI requires a foundation
of specialized hardware and software. Python, R, Java, C++, and Julia all offer characteristics
that are well-liked by AI engineers, yet no one programming language is exclusively associated
with AI.
A vast volume of labeled training data is typically ingested by AI systems, which then examine
the data for correlations and patterns before employing these patterns to forecast future states. By
studying millions of instances, an image recognition tool can learn to recognize and describe
objects in photographs, just as a chatbot that is given examples of text can learn to produce
lifelike dialogues with people. Generative AI approaches can produce realistic text, graphics,
music, and other media.
Learning: This aspect of AI programming focuses on acquiring data and creating rules
for turning it into actionable information. These rules, called algorithms,
provide computing devices with step-by-step instructions for completing a specific
task.
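Since this coursework uses JavaScript, a minimal sketch can make the idea of an algorithm concrete: a fixed set of step-by-step rules that turns raw data (temperature readings, with thresholds invented for this example) into actionable labels.

```javascript
// A minimal rule-based "algorithm" in the sense described above:
// step-by-step instructions that turn raw data into actionable information.
// The thresholds here are made up for illustration.
function classifyTemperature(celsius) {
  if (celsius < 0) return "freezing";
  if (celsius < 15) return "cold";
  if (celsius < 25) return "mild";
  return "hot";
}

const readings = [-5, 10, 20, 30];
const labels = readings.map(classifyTemperature);
console.log(labels); // → [ 'freezing', 'cold', 'mild', 'hot' ]
```

Hand-written rules like these are the opposite end of the spectrum from machine learning, where the rules are inferred from data rather than coded by a person.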
Artificial intelligence falls into two types: weak and strong. Weak artificial intelligence
is a system built to perform a single task. Video games like the chess example
from above and personal assistants like Apple's Siri and Amazon's Alexa are examples of weak
AI systems. The assistant responds to your question by providing an answer.
Systems with strong artificial intelligence can do tasks that are thought to be human-like. These
have a tendency to be more intricate and difficult systems. They are programmed to deal with
circumstances when problem-solving may be necessary without human intervention. These kinds
of technology are used in applications like self-driving automobiles and operating rooms in
medical facilities.
Figure 2 Applications of AI
Following are some sectors which have the application of Artificial Intelligence:
1. AI in Astrophysics
Complex problems in the universe can often be solved extremely effectively by artificial
intelligence. AI technology can be useful for understanding the universe, including its origin and
workings.
2. AI in Healthcare
In the past five to ten years, AI has become more beneficial for the healthcare sector and is
expected to have a big impact on this sector. AI is being used in the healthcare sector to diagnose
patients more quickly and accurately than humans. AI can assist doctors with diagnoses and also
alert them when patients' conditions deteriorate so that treatment can be administered before the
patient is hospitalized.
3. AI in Gaming
AI can be utilized in video games. AI machines are capable of playing strategic games like
chess, which require a great deal of strategic thinking on the part of the machine.
4. Finance using AI
Finance and AI are ideal partners. Automation, chatbots, adaptive
intelligence, algorithm trading, and machine learning are all being applied to financial processes
in the finance sector.
5. AI in Data Security
Every business must prioritize data security, and in the digital age, cyberattacks are increasing
significantly. Your data can be made more secure and safe with the help of AI. Examples like the
AEG bot and the AI2 Platform are used to identify software bugs and cyber-attacks more
accurately.
6. AI in Social Media
There are billions of user profiles on social media platforms like Facebook, Twitter, and
Snapchat, all of which need to be saved and handled very effectively. Massive volumes of data
can be managed and organized by AI. A lot of data may be analyzed by AI to find the newest
hashtags, trends, and user requirements.
7. AI in Travel and Transport
The demand for AI in the tourism industry is growing rapidly. AI is capable of doing a variety of
travel-related tasks, including planning trips and recommending hotels, flights, and the best
routes to clients. AI-powered chatbots are being used in the travel industry to communicate with
clients in a human-like manner for better and quicker responses.
The concept of artificial creatures, mechanical men, and other automatons existing or having the
potential to exist was first raised by thinkers in antiquity, which is when artificial intelligence
first emerged.
Throughout the 1700s and beyond, early thinkers helped artificial intelligence become more real.
Philosophers wondered whether it was possible to artificially automate and control non-human
machine intelligence. Classical philosophers, mathematicians, and logicians first investigated the
mechanical manipulation of symbols, which eventually sparked interest in AI and led to the
development of the Atanasoff-Berry Computer (ABC), an early electronic digital computer,
in the 1940s. The development of an "electronic brain," or artificially intelligent being, was
spurred on by this particular technology.
About a decade later, AI pioneers shaped our current understanding of the field. Alan Turing,
a mathematician among other things, devised a test of how well a machine could mimic human
behavior. The phrase "artificial intelligence" was first used by computer and cognitive scientist
John McCarthy in 1956 at a summer conference held at Dartmouth College.
Many researchers, programmers, logicians, and theorists contributed to the development of the
current understanding of artificial intelligence starting in the 1950s. Each decade brought new
discoveries and inventions that altered people's fundamental understanding of artificial intelligence
and how historical developments have propelled AI from an impractical fantasy to a viable
possibility for both the present and the future. (g2, 2023)
Figure 3 History of AI
1950s:
1950: Alan Turing suggests the "Turing Test" to determine whether a machine is
capable of displaying intelligent behavior.
1956: The Dartmouth Conference marks the beginning of the field of artificial
intelligence, which aims to create "thinking machines."
1960s-1970s:
1965: Joseph Weizenbaum creates ELIZA, a computer program that mimics conversation
using primitive language processing and pattern matching.
1966: Edward Feigenbaum and Joshua Lederberg establish the idea of "expert systems."
1973: The Lighthill Report criticizes the limitations of AI research, resulting in
decreased funding and an "AI winter."
1980s-1990s:
1990: The field of machine learning gains popularity, with researchers focusing on
algorithms capable of improving performance through experience.
2000s-2010s:
2005: The autonomous vehicle Stanley wins the DARPA Grand Challenge.
2012: With the success of deep neural networks in the ImageNet competition, deep
learning attracts more attention.
2018: OpenAI releases GPT, a ground-breaking language model.
2019: DeepMind's AlphaStar becomes the first AI to reach Grandmaster level in
StarCraft II.
2020s:
2020: Release of the highly sophisticated language model GPT-3, which can produce
writing that is contextually appropriate and cohesive.
Machine learning is a branch of artificial intelligence that focuses primarily on creating
algorithms that enable a computer to learn independently from data and previous experiences.
Arthur Samuel coined the phrase "machine learning" in 1959. In a nutshell:
Machine learning algorithms create a mathematical model with the aid of historical sample data,
or "training data," that aids in making predictions or judgments without being explicitly
programmed. Computer science and statistics are used with machine learning to create prediction
models. Machine learning creates and uses algorithms that learn from past data: the more
data we supply, the better the performance. If additional data can be gathered to help a
machine perform better, it can learn.
1. Supervised learning
This kind of machine learning (ML) uses supervision, where computers are trained on labeled
datasets and allowed to make predictions based on the training data. According to the labeled
dataset, some input and output parameters have already been mapped. Consequently, the input
and related output are used to train the machine. In later stages, a tool is created to forecast
the result using the test dataset.
Think about an input dataset that contains photos of parrots and crows. The computer is initially
trained to recognize the images, including the color, shape, and size of the parrot and crow's
eyes. After training, an image of a parrot is used as input, and the computer is supposed to
recognize the object and forecast the result. To arrive at a final forecast, the trained machine
looks for the many characteristics of the object in the input image, such as color, eyes, shape, etc.
In supervised machine learning, object identification works like this.
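The parrot/crow scenario above can be sketched in JavaScript with a 1-nearest-neighbour classifier, one of the simplest supervised methods. The numeric features ([bodySize, colourfulness]) stand in for the colour, shape, and size attributes the text mentions and are invented for this sketch.

```javascript
// Toy supervised learning: 1-nearest-neighbour classification.
// Each training example pairs made-up numeric features
// [bodySize, colourfulness] with a label, mirroring the labelled
// parrot/crow dataset described above.
const trainingData = [
  { features: [25, 0.9], label: "parrot" },
  { features: [28, 0.8], label: "parrot" },
  { features: [45, 0.1], label: "crow" },
  { features: [50, 0.2], label: "crow" },
];

function distance(a, b) {
  return Math.hypot(a[0] - b[0], a[1] - b[1]);
}

// Predict by finding the closest labelled example in the training data.
function predict(features) {
  let best = trainingData[0];
  for (const example of trainingData) {
    if (distance(features, example.features) < distance(features, best.features)) {
      best = example;
    }
  }
  return best.label;
}

console.log(predict([26, 0.85])); // close to the parrot examples → "parrot"
console.log(predict([48, 0.15])); // close to the crow examples → "crow"
```

The key property of supervised learning shows up clearly: the prediction is driven entirely by previously labelled input/output pairs.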
Classification: These algorithms deal with categorical output variables, such as
yes or no, true or false, male or female, etc. Spam detection
and email filtering are two real-world applications in this area.
The Random Forest Algorithm, Decision Tree Algorithm, Logistic Regression Algorithm, and
Support Vector Machine Algorithm are a few examples of well-known classification methods.
Regression: Regression algorithms solve regression issues where the connection between
the input and output variables is linear. These have a reputation for forecasting
continuous output variables. Examples include market trend research and weather
forecasting.
The Simple Linear Regression Algorithm, Multivariate Regression Algorithm, Decision Tree
Algorithm, and Lasso Regression are all common regression algorithms.
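The simple linear regression mentioned above can be sketched directly: fit a line to points by ordinary least squares and use it to predict a continuous output. The data points are invented and chosen to lie exactly on y = 2x + 1 so the fit is easy to check.

```javascript
// Simple linear regression fitted by ordinary least squares,
// illustrating how regression predicts a continuous output variable.
function fitLine(xs, ys) {
  const n = xs.length;
  const meanX = xs.reduce((a, b) => a + b, 0) / n;
  const meanY = ys.reduce((a, b) => a + b, 0) / n;
  let num = 0, den = 0;
  for (let i = 0; i < n; i++) {
    num += (xs[i] - meanX) * (ys[i] - meanY); // covariance term
    den += (xs[i] - meanX) ** 2;              // variance term
  }
  const slope = num / den;
  const intercept = meanY - slope * meanX;
  return { slope, intercept, predict: (x) => slope * x + intercept };
}

// The data follow y = 2x + 1 exactly, so the fit recovers slope 2, intercept 1.
const model = fitLine([1, 2, 3, 4], [3, 5, 7, 9]);
console.log(model.slope, model.intercept); // → 2 1
console.log(model.predict(10));            // → 21
```

Real regression libraries handle many input variables and noisy data, but the principle is the same: learn parameters from training data, then predict unseen values.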
2. Unsupervised learning
Unsupervised learning is a learning method in which no supervision is provided. The
machine is trained using an unlabeled dataset and predicts results
independently. Unsupervised learning algorithms attempt to group the unsorted dataset
according to the input's similarities, differences, and patterns.
Take, for instance, a collection of input photos showing a fruit-filled container. The machine
learning model in this case is unfamiliar with the photos. When we feed the dataset into the
machine learning (ML) model, the model's job is to categorize the objects in the input photos
based on their characteristics, such as their color, form, or differences. After categorization, the
machine predicts the result while being put to the test against a test dataset.
Clustering: Objects are grouped into clusters using the clustering approach, based on
criteria such as similarities or differences between the objects; for instance, putting
customers into groups based on the goods they buy.
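The customer-grouping example above can be sketched with a minimal k-means loop, a standard clustering algorithm. The [spend, visitsPerMonth] features are invented for this illustration, and the naive initialization (first k points) is a simplification of real implementations.

```javascript
// A minimal k-means sketch: group 2-D points (e.g. customers described
// by invented [spend, visitsPerMonth] features) into k clusters.
function kMeans(points, k, iterations = 10) {
  // naive initialization: take the first k points as centroids
  let centroids = points.slice(0, k).map((p) => [...p]);
  let assignments = [];
  for (let iter = 0; iter < iterations; iter++) {
    // assignment step: each point joins its nearest centroid
    assignments = points.map((p) => {
      let best = 0;
      for (let c = 1; c < k; c++) {
        if (dist(p, centroids[c]) < dist(p, centroids[best])) best = c;
      }
      return best;
    });
    // update step: each centroid moves to the mean of its members
    for (let c = 0; c < k; c++) {
      const members = points.filter((_, i) => assignments[i] === c);
      if (members.length > 0) {
        centroids[c] = [
          members.reduce((s, p) => s + p[0], 0) / members.length,
          members.reduce((s, p) => s + p[1], 0) / members.length,
        ];
      }
    }
  }
  return assignments;
}

function dist(a, b) {
  return Math.hypot(a[0] - b[0], a[1] - b[1]);
}

// Two obvious groups: low-spend and high-spend customers.
const customers = [[10, 1], [12, 2], [11, 1], [90, 8], [95, 9], [92, 7]];
console.log(kMeans(customers, 2)); // → [ 0, 0, 0, 1, 1, 1 ]
```

Note that no labels were supplied anywhere: the groups emerge purely from the similarities in the data, which is exactly what distinguishes unsupervised from supervised learning.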
3. Reinforcement learning
Unlike supervised learning, reinforcement learning lacks labeled data, and the agents learn via
experiences only. Consider video games. Here, the game specifies the environment, and each
move of the reinforcement agent defines its state. The agent is entitled to receive feedback via
punishment and rewards, thereby affecting the overall game score. The ultimate goal of the agent
is to achieve a high score.
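The reward-and-punishment loop described above can be sketched with tabular Q-learning on a tiny one-dimensional "game": the agent starts at cell 0 and is rewarded only for reaching cell 4. All the numbers (learning rate, discount factor, episode count) are illustrative choices, not tuned values.

```javascript
// Tabular Q-learning on a tiny 1-D "game": the agent starts at cell 0
// and earns a reward of +1 only when it reaches cell 4.
const N_STATES = 5;
const ACTIONS = [-1, +1]; // move left or move right
const alpha = 0.5, gamma = 0.9, episodes = 200;

// Q[state][actionIndex] starts at zero and is refined by experience.
const Q = Array.from({ length: N_STATES }, () => [0, 0]);

for (let ep = 0; ep < episodes; ep++) {
  let state = 0;
  while (state !== N_STATES - 1) {
    // epsilon-greedy choice between exploring and exploiting
    const a = Math.random() < 0.2 || Q[state][0] === Q[state][1]
      ? Math.floor(Math.random() * 2)
      : (Q[state][1] > Q[state][0] ? 1 : 0);
    const next = Math.min(N_STATES - 1, Math.max(0, state + ACTIONS[a]));
    const reward = next === N_STATES - 1 ? 1 : 0;
    // Q-learning update: reward feedback adjusts the value estimate
    Q[state][a] += alpha * (reward + gamma * Math.max(...Q[next]) - Q[state][a]);
    state = next;
  }
}

// After training, "move right" should score higher in every non-goal cell.
console.log(Q.map((q) => (q[1] > q[0] ? "right" : "left")).slice(0, 4));
```

No labelled data appears anywhere; the agent learns the best move in each state purely from the rewards its own experience produces, as the paragraph above describes.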
Many artificial intelligence (AI) apps and services are powered by deep learning, which
enhances automation by carrying out mental and physical tasks without the need for human
intervention. Both established products and services (including digital assistants, voice-activated
TV remote controls, and credit card fraud detection) as well as cutting-edge innovations (like
self-driving automobiles) are powered by deep learning technology. (ibm, 2023)
Deep neural networks are made up of many layers of interconnected nodes, each of which
improves upon the prediction or categorization made by the one underneath it. Forward
propagation refers to the movement of calculations through the network. A deep neural network's
visible layers are its input and output layers. The deep learning model ingests the data for
processing in the input layer, and the final prediction or classification is performed in the output
layer.
Backpropagation is a different method that uses techniques like gradient descent to calculate
prediction errors before changing the function's weights and biases by iteratively going back
through the layers in an effort to train the model. A neural network can make predictions and
make necessary corrections for any faults thanks to forward propagation and backpropagation
working together. The algorithm continuously improves in accuracy over time.
The above describes the simplest kind of deep neural network in the simplest possible terms.
Deep learning methods are far more complex, and different forms of neural networks address
different problems and datasets. For example:
Convolutional neural networks (CNNs), used primarily in computer vision and image
classification applications, can detect features and patterns within an image, enabling tasks, like
object detection or recognition. In 2015, a CNN bested a human in an object recognition
challenge for the first time.
Recurrent neural networks (RNNs) are typically used in natural language and speech
recognition applications, as they leverage sequential or time-series data.
Real-world deep learning applications are a part of our daily lives, but in most cases, they are so
well-integrated into products and services that users are unaware of the complex data processing
that is taking place in the background. Some of these examples include the following:
Law enforcement
Deep learning algorithms can evaluate transactional data and learn from it to spot risky trends
that could be signs of fraud or other illegal conduct. By extracting patterns and evidence from
sound and video recordings, images, and documents, speech recognition, computer vision, and
other deep learning applications can increase the efficiency and effectiveness of investigative
analysis. This aids law enforcement in quickly and accurately analyzing massive amounts of
data.
Financial services
Customer assistance
Deep learning technology is used widely in businesses' customer care procedures. A simple type
of AI is chatbots, which are utilized in many different applications, businesses, and customer
support websites. Traditional chatbots, which are frequently found in menus resembling call
centers, use natural language and even facial recognition. However, more advanced chatbot
solutions try to ascertain whether there are many answers to ambiguous questions through
machine learning. The chatbot then attempts to immediately respond to these inquiries or direct
the interaction to a human user depending on the responses it has received.
The expert system (ES), a subset of AI and among the first truly successful applications of
artificial intelligence, was first developed around 1970. By drawing on the knowledge kept in
its knowledge base, it can solve even the most complicated problems like an expert. Like a
human expert, the system aids in decision-making for complex issues by using both facts and
heuristics. It is so named because it possesses in-depth knowledge of a certain field and is
capable of resolving any challenging issue in that field. These systems are created for a certain
industry, like science, medical, etc.
The knowledge that an expert system has stored in its knowledge base determines how well it
performs. The performance of the system increases as more knowledge is kept in the KB. The
Google search box's recommendation of spelling problems is one of the typical examples of an
ES.
MYCIN: It was one of the earliest backward chaining expert systems that was designed to find
the bacteria causing infections like bacteraemia and meningitis. It was also used for the
recommendation of antibiotics and the diagnosis of blood clotting diseases.
PXDES: An expert system used to determine the type and severity of lung cancer. To
determine the disease, it examines an image of the upper body in which the cancer appears
as a shadow; this shadow identifies the type and degree of harm.
CaDeT: The CaDet expert system is a diagnostic support system that can detect cancer at early
stages.
User Interface
Inference Engine
Knowledge Base
The expert system communicates with the user through a user interface, receives queries as input
in a readable format, and sends those queries to the inference engine. It displays the output to the
user after receiving a response from the inference engine. In other words, it's an interface that
enables a user who lacks technical expertise to consult an expert system to solve a problem.
Since it serves as the system's primary processing component, the inference engine is referred to
as the expert system's "brain." It uses the knowledge base and inference rules to draw
conclusions or infer new information. It aids in determining an error-free response to user
requests.
The system pulls knowledge from the knowledge base with the aid of an inference engine.
Deterministic inference engine: Conclusions drawn by this kind of inference engine are
assumed to be true; it is based on facts and rules.
Probabilistic inference engine: This kind of inference engine contains uncertainty in its
conclusions, which are based on probability.
The following modes are used by the inference engine to obtain the solutions:
Forward chaining: It begins with the known facts and rules, then applies the
inference rules to add their conclusions to the set of known facts.
Backward chaining: A backward-reasoning technique in which the goal is established
first, and the system then works backward through the facts that prove it.
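Forward chaining as described above can be sketched in a few lines of JavaScript: start from known facts, repeatedly fire any if-then rule whose conditions hold, and add each conclusion to the fact set. The medical-style rules and facts are invented for this illustration.

```javascript
// A minimal forward-chaining inference engine. Rules are if-then
// pairs, and conclusions become new established facts.
const rules = [
  { if: ["has fever", "has rash"], then: "suspect measles" },
  { if: ["suspect measles"], then: "recommend isolation" },
  { if: ["has cough"], then: "suspect cold" },
];

function forwardChain(initialFacts) {
  const facts = new Set(initialFacts);
  let changed = true;
  while (changed) {
    changed = false;
    for (const rule of rules) {
      if (rule.if.every((f) => facts.has(f)) && !facts.has(rule.then)) {
        facts.add(rule.then); // the conclusion joins the established facts
        changed = true;
      }
    }
  }
  return facts;
}

const facts = forwardChain(["has fever", "has rash"]);
console.log([...facts]);
// → [ 'has fever', 'has rash', 'suspect measles', 'recommend isolation' ]
```

Note how the second rule only fires because the first one added "suspect measles": conclusions feed back into the fact base, which is exactly the forward-chaining behaviour the text describes.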
The knowledge base is a form of data repository that houses knowledge gathered from
several subject-matter specialists in a certain field. It is regarded as a big knowledge
repository. The Expert System will be more accurate the larger the knowledge base.
It is comparable to a database that holds data and guidelines for a specific field or subject.
The knowledge base can alternatively be seen as a collection of things and their qualities.
A lion, for example, is an object with the characteristics of being a mammal and not a
domestic animal, among others.
Heuristic knowledge is built on experience, evaluation, practice, and the capacity to make
educated guesses.
Knowledge Representation:
If-else rules are used to codify the knowledge that is kept in the knowledge base.
Advantages of expert systems:
They can be utilized in dangerous locations where it is not safe for humans.
Since these systems are unaffected by feelings, tension, or exhaustion, their functioning is
constant.
Limitations of expert systems:
If the knowledge base contains inaccurate information, the expert system's response
could be incorrect.
It is unable to provide creative output for many situations, much like a human individual.
One of the key restrictions is the need for a unique ES for each domain.
Since it is unable to learn from its mistakes, manual updates are necessary.
Another difficulty is context understanding, which needs to be tackled through semantic analysis
for machine learning to succeed. Instead of just recognizing literal meanings, natural language
understanding (NLU), a component of natural language processing (NLP), deals with these
complexities through machine reading comprehension. The goal of NLP and NLU is to enable
computers to comprehend human language sufficiently well to engage in genuine conversation.
Streamlining the recruiting process on sites like LinkedIn by scanning through people’s
listed skills and experience.
Language models like autocomplete, which are trained to predict the next words in a
text based on what has already been typed.
The more we write, speak, and converse with computers, the better they get at all of these tasks
because they are constantly learning. A feature like Google Translate, which makes use of a
program called Google Neural Machine Translation (GNMT), is a nice illustration of this
iterative learning. GNMT uses a large artificial neural network to improve accuracy and
fluency across languages. GNMT tries to translate complete
sentences rather than one sentence at a time. Because it searches through millions of samples,
GNMT determines the most pertinent translation by considering a wider context.
Additionally, rather than developing a unique global interlingua, it looks for similarities among
many languages. Unlike the original Google Translate which used the lengthy process of
translating from the source language into English before translating into the target language,
GNMT uses “zero-shot translate” – translating directly from source to target.
Google Translate may not be good enough yet for medical instructions, but NLP is widely used
in healthcare. It is particularly useful in aggregating information from electronic health record
systems, which is full of unstructured data. Not only is it unstructured, but because of the
challenges of using sometimes clunky platforms, doctors’ case notes may be inconsistent and
will naturally use lots of different keywords. NLP can help discover previously missed or
improperly coded conditions.
Natural Language Generation (NLG)
It is the process of producing meaningful phrases and sentences in the form of natural language
from some internal representation.
It involves −
Text planning − It includes retrieving the relevant content from knowledge base.
(tutorialspoint, 2021)
Through emails, product reviews, social media posts, surveys, and other forms of
communication, NLP tools assist businesses in understanding how their clients regard them. AI
solutions may be used to automate monotonous and time-consuming processes, boost
productivity, and free up employees to focus on more rewarding work, in addition to helping
businesses comprehend online interactions and how customers talk about them.
Through the identification of emotions in texts, opinions are categorized as either positive,
negative, or neutral.
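A naive lexicon-based scorer shows the positive/negative/neutral categorization in miniature. The tiny word lists below are invented for the sketch; real sentiment tools use far larger lexicons or trained models.

```javascript
// Naive lexicon-based sentiment analysis: count positive and
// negative words and label the overall text accordingly.
const POSITIVE = new Set(["good", "great", "love", "excellent", "happy"]);
const NEGATIVE = new Set(["bad", "terrible", "hate", "awful", "angry"]);

function sentiment(text) {
  const words = text.toLowerCase().match(/[a-z]+/g) || [];
  let score = 0;
  for (const w of words) {
    if (POSITIVE.has(w)) score++;
    if (NEGATIVE.has(w)) score--;
  }
  return score > 0 ? "positive" : score < 0 ? "negative" : "neutral";
}

console.log(sentiment("I love this product, it is excellent")); // → "positive"
console.log(sentiment("Terrible support, I hate waiting"));     // → "negative"
console.log(sentiment("The parcel arrived on Tuesday"));        // → "neutral"
```

Word counting ignores negation and context ("not bad" scores negative here), which is precisely the gap that machine-learning sentiment models are built to close.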
Businesses can learn more about how consumers feel about brands or products by examining
social media posts, product reviews, or online polls. For instance, you could instantly identify
irate consumer remarks by analyzing tweets mentioning your company.
To find out how customers feel about your level of customer service, you might wish to send out
a survey. You can learn which facets of your customer service elicit favorable or negative
feedback by examining open-ended NPS survey replies.
Language Translation
Over the previous few years, machine translation technology has advanced significantly, with
Facebook's translation systems reported to approach human-level quality in 2019.
Businesses can interact in a variety of languages thanks to translation software, which can help
them expand into new markets or strengthen their worldwide communication.
Additionally, you may teach translation software to comprehend certain jargon used in any
industry, such as banking or medicine. Inaccurate translations, which are frequent with generic
translation tools, are thus not a concern.
Text Extraction
You can extract pre-defined information from text using text extraction. This tool assists you in
identifying and extracting pertinent keywords, features (such as product codes, colors, and
specs), and named entities (such as names of people, places, companies, emails, etc.) if you work
with vast amounts of data.
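Extracting pre-defined items like emails and product codes can be sketched with regular expressions. The product-code pattern (two capital letters followed by four digits) is an assumed convention for this example, not a universal format.

```javascript
// Rule-based text extraction: pull emails and product codes out of
// free text with regular expressions.
function extract(text) {
  return {
    emails: text.match(/[\w.+-]+@[\w-]+\.[\w.]+/g) || [],
    // assumed product-code convention: two letters then four digits
    productCodes: text.match(/\b[A-Z]{2}\d{4}\b/g) || [],
  };
}

const note = "Ask anna@example.com about items SK1234 and TR9876.";
console.log(extract(note));
// → { emails: [ 'anna@example.com' ], productCodes: [ 'SK1234', 'TR9876' ] }
```

Patterns like these handle rigidly formatted entities well; free-form named entities (people, places, companies) need statistical NLP models rather than fixed regular expressions.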
Chatbots
AI systems known as chatbots are created to communicate verbally or textually with humans.
Due to their capacity to provide 24/7 support (speeding up response times), manage several
inquiries at once, and free up human agents from answering repetitive questions, chatbots are
increasingly being used for customer service.
You can trust chatbots to complete routine and easy tasks because they actively learn from every
contact and get better at interpreting user intent. If they encounter an inquiry they are unable
to address, they will forward it to a human representative.
Classification of Topics
You may classify unstructured text into categories by using topic classification. It's a terrific
approach for businesses to learn from customer feedback.
Consider that you want to examine hundreds of open-ended NPS survey responses. How many
comments refer to your customer service? How many consumers bring up "Pricing" in
conversation? Using this topic classifier for NPS feedback, you can quickly tag all of your data.
Topic classification can also be used to automatically tag incoming support tickets and forward
them to the appropriate individual.
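Tagging responses by topic can be sketched with simple keyword matching. The topics and keyword lists below are invented for this illustration; production classifiers learn such associations from labelled data rather than from hand-written lists.

```javascript
// Keyword-based topic classification: tag each piece of feedback
// with every topic whose keywords it mentions.
const TOPICS = {
  "Customer Service": ["support", "agent", "helpful", "rude", "wait"],
  "Pricing": ["price", "expensive", "cheap", "cost", "subscription"],
};

function classifyTopics(text) {
  const lower = text.toLowerCase();
  return Object.keys(TOPICS).filter((topic) =>
    TOPICS[topic].some((kw) => lower.includes(kw))
  );
}

console.log(classifyTopics("Support was helpful but the price is too high"));
// → [ 'Customer Service', 'Pricing' ]
console.log(classifyTopics("The app crashes on startup"));
// → []
```

A router for support tickets could use the same function: tickets tagged "Pricing" go to billing, untagged ones fall through to a general queue.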
The purpose of cognitive systems is not simply to solve problems for us. They gain knowledge
from past experience and accumulated data, then analyze that data to create unique strategies
and solutions. Self-learning systems interact with their surroundings in real time and use the
specifics they observe to generate new ideas.
Artificial intelligence (AI) applications can more easily mimic human thought processes as a
result of cognitive computing. It enables you to develop original methods and solutions based on
your prior knowledge. With cognitive computing (CC), which goes beyond simple machine
learning, a computer acquires facts from a body of knowledge that can be accessed and recalled
later. Based
on this, it evaluates the scenario and contrasts it with known facts. After that, it quickly provides
a recommendation. (clickworker, 2021)
The basic use case of Artificial Intelligence is to implement the best algorithm for solving a
problem. However, cognitive computing goes further to mimic human wisdom and intelligence
by studying a series of factors. Cognitive computing varies widely from Artificial Intelligence
in terms of concept.
Cognitive computing mimics and learns from the way humans think.
Cognitive computing, in contrast to artificial intelligence (AI) systems, may learn from the data
and patterns to recommend human-relevant activities based on their knowledge. When it comes
to AI, the program assumes total control and employs a pre-established algorithm to prevent a
specific scenario or conduct the required actions.
However, cognitive computing can be used in a variety of fields where it serves as an assistant
rather than the agent that actually completes a task.
Users of cognitive computing can examine data more quickly and precisely without
worrying about making mistakes. Its fundamental objective is to support human decision-
making; unlike AI, it does not take the human out of the loop.
Cognitive computing uses pattern recognition and machine learning to adapt and make the most
of the information, even when it is unstructured. To provide these benefits, cognitive computing
usually offers the following attributes.
Adaptive Learning: Cognitive systems accommodate an influx of rapidly changing data and
information, which helps in fulfilling a growing set of goals. It can process dynamic data in
real time, modifying itself as the data and surrounding environment require.
Iterative and Stateful: CC identifies issues by posing questions or pulling in additional
data when a query is vague or incomplete. The technology ensures this by storing details about
potential scenarios and related situations.
Contextual: CC systems must identify, understand, and mine contextual data, such as domain,
syntax, time, requirements, or a particular user's profile, tasks, and goals. The system draws data
from multiple sources of information, including visual, auditory, or sensor data. It also collects
information from structured and unstructured data.
Banking and Finance: Cognitive computing helps the banking sector increase revenue by
enhancing client engagement, operational effectiveness, and customer experience. Institutions in
the financial and banking sectors will change as a result of new analytics, more contextual
interaction, and business transformation.
Cybersecurity: Cognitive computing makes people less susceptible to manipulation and offers a
technical option to detect any false information and misleading data, which helps avoid
cyberattacks.
Healthcare: By using cognitive computing systems, medical personnel can choose better
treatments for patients. To work on human decision-making, it needs medical records, current
patient data, and other information.
Education: The system has the power to alter how universities and high schools work. Students
will receive individualized study materials that will aid in their coursework, thanks to cognitive
computing. Students will be assisted in grasping the crucial subject at their own pace.
Artificial neural networks, or ANNs, are an artificial intelligence technique computationally
designed to imitate how a human brain works. An ANN is made up of layers of artificial
neurons. A neural network takes some input and, based on the values, outputs a prediction. This
prediction can be a classification, an identification, or a regression. (indatalabs, 2020)
Artificial neural networks provide the technical basis for the majority of the most commercially
viable AI fields. ANNs are excellent at identifying patterns in both image and audio information
as well as forecasting trends.
Algorithms used in artificial intelligence include neural networks. They are modeled after the
human brain and are capable of processing massive volumes of data, including voice inputs and
visual inputs, to enhance comprehension of the subject matter. When neural networks contain
several layers, they perform well.
As for machine learning, both ANN and ML models can predict or classify the output. The only
difference is that a machine learning model makes decisions based on the training data.
Therefore, both supervised and unsupervised ML models require human supervision. A neural
network requires less human intervention at the initial stages and can produce accurate decisions
with no manual effort.
The input layer is responsible for passing raw data into the network.
The hidden layers are where the computation takes place and then submit the result to the
output layer.
The kind of technology that is most frequently mentioned in relation to the AI revolution is
artificial neural networks. These have a feedforward pattern and are the most conventional sort
of neuronal organization. It denotes that ANNs only ever process inputs moving forward. The
earliest and most basic types of fundamental deep learning models are called artificial nets.
This kind of network is frequently used for non-sequential data that is unrelated to temporal
characteristics and falls under the category of supervised learning. As a result, pattern
classification, association, and mapping are all excellent applications for ANNs. Regarding the
input, this can be text, tabular, or image data.
One of the most well-known deep learning algorithms is the convolutional neural network
(CNN or ConvNet). In this sort of machine learning, a model learns to carry out classification
tasks on image, video, text, or sound data.
Although CNNs resemble recurrent nets in many respects, there is a significant distinction:
recurrent nets are better at interpreting sequential, temporal data, while CNNs excel at spatial
data. Because they can build an internal representation of a two-dimensional image, CNNs are
among the most popular models, especially for image analysis that must take location and scale
into account. Convolutional nets are therefore the de facto industry standard for any kind of
image prediction task, although they need much more input data than ANNs to achieve high
accuracy rates.
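The core CNN operation is sliding a small kernel over a 2D grid of pixel values. A minimal sketch, using a common vertical-edge-detection kernel purely as an illustration:

```javascript
// Slide a k x k kernel over a 2D grid and sum the element-wise products.
function convolve2d(image, kernel) {
  const k = kernel.length;
  const out = [];
  for (let y = 0; y <= image.length - k; y++) {
    const row = [];
    for (let x = 0; x <= image[0].length - k; x++) {
      let sum = 0;
      for (let ky = 0; ky < k; ky++)
        for (let kx = 0; kx < k; kx++)
          sum += image[y + ky][x + kx] * kernel[ky][kx];
      row.push(sum);
    }
    out.push(row);
  }
  return out;
}

// A tiny image with a vertical 0 -> 1 edge down the middle.
const image = [
  [0, 0, 1, 1],
  [0, 0, 1, 1],
  [0, 0, 1, 1],
  [0, 0, 1, 1],
];
const vertEdge = [
  [-1, 0, 1],
  [-1, 0, 1],
  [-1, 0, 1],
];
const edges = convolve2d(image, vertEdge);
console.log(edges); // [[3, 3], [3, 3]] -- strong response at the edge
```

In a real CNN the kernel values are learned during training, and many kernels are stacked into layers.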
GANs are a powerful type of network designed specifically for unsupervised learning. These can
automatically identify and pick up the patterns in input data to generate or output new examples
based on the original dataset.
The potential of GANs is massive, as they can mimic any data regularities. Thus, generative nets
can produce structures that are eerily similar to real-world creations, be it images, music, speech,
or prose. In some sense, generative adversarial networks are machine artists, and they can also
be used to predict risk and recovery in healthcare.
Face detection has long been one of the difficult areas of AI study. Cascade classifiers were
once utilized for this purpose, and they still are today, but deep learning has completely changed
the game. Although the classic cascade classifier evaluates quickly, it struggles to distinguish
faces seen from various angles, whereas CNNs can detect faces at a variety of angles. There are
two ways to create facial recognition systems:
1. Use a pre-trained model-based solution like dlib, DeepFace, FaceNet, or another. This method
takes less time because it already has face recognition capabilities. By using machine learning
consulting, you can also calibrate the pre-made models.
2. Create a facial recognition system from scratch if you require capabilities for several uses. The
datasets will be bigger in this situation.
Whatever the method, facial recognition solutions all follow the same route. They begin by
mapping facial landmarks from a picture or a video, then compare the results against a database
to identify a match. The system is pre-trained on a vast collection of photos, such as the
MegaFace database, to achieve high recognition accuracy; this is the primary method for
training facial recognition systems.
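The compare-with-a-database step can be sketched as a nearest-neighbour search over face embeddings. The 3-dimensional vectors and names below are toy values for illustration; real systems such as FaceNet use embeddings of 128 or more dimensions:

```javascript
// Euclidean distance between two embedding vectors.
function distance(a, b) {
  return Math.sqrt(a.reduce((s, v, i) => s + (v - b[i]) ** 2, 0));
}

// Return the closest database entry within the threshold, or null.
function findMatch(query, database, threshold) {
  let best = null;
  for (const [name, embedding] of Object.entries(database)) {
    const d = distance(query, embedding);
    if (d < threshold && (!best || d < best.distance)) best = { name, distance: d };
  }
  return best; // null means no face in the database is close enough
}

const database = {
  alice: [0.1, 0.9, 0.3],
  bob:   [0.8, 0.2, 0.5],
};
const match = findMatch([0.12, 0.88, 0.31], database, 0.5);
console.log(match.name); // 'alice'
```

The threshold controls the trade-off between false matches and missed matches.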
Cancer detection
One of the most cutting-edge uses of neural networks in artificial intelligence is intelligent
medical imaging. Because of the inadequacies of traditional screening techniques, early cancer
identification was virtually impossible for years. Today, AI-based technology can detect
small-scale anomalies, saving countless lives.
A deep learning algorithm developed by NCI researchers in 2019 can recognize cervical
precancers that need to be removed or treated. Additionally, several AI systems have been
shown to improve the detection of precancerous growths, and newer breast cancer risk models
forecast the likelihood of breast cancer, helping to prevent malignant tumors.
In total, the market for medical imaging is anticipated to grow to $56.53 billion by 2028. Deep
learning will advance further due to the impact of COVID-19 and the rapid development of
technology, positioning it as a promising medical innovation.
Autonomous robots
The use of artificial neural networks in robots may sound like science fiction, but it was
growing steadily in popularity in 2022. The worldwide robotics market was estimated to be
worth $27 billion in 2020 and is expected to grow to $74.1 billion by 2026. However, robotics is
inextricably linked to a number of constraints, such as efficient power sources, trustworthy AI,
and environment mapping.
Applying artificial neural networks in robotics helps engineers ease these difficulties. Neural
networks are therefore frequently employed in robotics for functions like navigation and
recognition. They are especially helpful in applications where a robot needs to explore its
environment and interact with a variety of items, since they can be trained to detect specific
objects or patterns.
Neural networks can also be utilized for motion planning and trajectory optimization, which
helps robots maneuver more effectively in challenging environments.
One of the many robots that view the world through neural networks is the starship robot. The
latter enables delivery bots to safely cross the road, avoid collisions with people, and stay on the
sidewalk. The business claims that obstacle identification is made possible using deep learning in
conjunction with sensors and radars, which surrounds the robot in a "situational awareness
bubble."
There are many different explanations of what machine vision is and how it operates. It is
crucial to remember that machine vision is distinct from image processing, a process whose
result is simply another image. To make sense of the data collected by a machine vision system,
information such as the identity, location, and orientation of the captured object(s) is translated
into structured data.
Machine vision is described as "the ability of a computer to see; it employs one or more video
cameras, analogue-to-digital conversion (ADC), and digital signal processing (DSP)" by
SearchEnterpriseAI. Data from the process is sent to a computer or robot controller. The
complexity of machine vision is comparable to that of speech recognition. (aijourn, 2021)
1. Cameras
There is a range of different cameras available for a machine vision system with different
interfaces, pixels, resolutions, and features. The cameras are the primary piece of equipment for
inspecting the object or item in a machine vision system.
It might be that the system needs to use multiple cameras for a process, a setup referred to as
dual cameras. This means multiple cameras cover one specific point of inspection, ensuring that
an otherwise hidden part can be properly checked.
2. Smart cameras
A smart camera is required when a machine vision system needs to capture and extract
application-specific information from an image. A smart camera is capable of generating
descriptions and making decisions, and it usually contains all necessary communication
interfaces as well as connectivity to Wi-Fi or a server to easily transfer captured image data.
3. 3D cameras
3D cameras enable the depth of an item to be displayed in an image to show different angles of
the image and give an idea for the shape of an item. By using a 3D camera in the machine vision
system, it will allow different perspectives and depth perception.
4. Thermal imaging cameras
A thermal imaging camera is a type of thermographic camera that renders images through
infrared radiation, showing areas of heat in the images.
5. Software
A machine vision system requires software to visualize the data, show operators what the
cameras are seeing for analysis and maintenance, and program the functions of the hardware.
Different software packages are available and can be matched to what the machine vision
system needs to do and what data needs to be visualized for operators.
6. Embedded systems
An embedded system for machine vision, also known as an imaging computer, is a camera
without housing or a frame that is directly connected to a processing board, combining all parts
on one single-board computer. Due to the increasing number of open-source libraries for
machine learning and AI, more computer vision systems are being deployed as embedded
systems or IoT devices.
7. Frame grabbers
This is an electronic device that captures individual digital still frames from an analog video
signal or a digital video stream. It can be used as an add-on to a machine vision system to
capture specific frames for analysis from a fast-moving process.
8. Illuminators
These add light to the system so the camera has enough illumination to capture the image. The
level of detail required in the imaging determines what type of lighting the machine vision
system needs in order to identify its targets.
This course combines artificial intelligence and JavaScript and takes you into the world of 2D
game development and creating 2D AI games. There are many job opportunities in this field
today, and its scope is huge; demand for AI programmers is increasing every day.
The Simple Space War Game is a straightforward web game written in the JavaScript
programming language. The project includes a complete code script covering the application's
entire gameplay, with numerous backdrop graphics and image sprites. Students enrolled in
IT-related courses who want to develop a web gaming application can use this project to their
advantage: you can study the source code and discover the various features that went into
creating this game.
The Simple Space War Game was developed in JavaScript and is a user-friendly application
that is open to modification. The game is played with keyboard controls: W to move up, S to
move down, A to move left, D to move right, and Spacebar to shoot. Your main goal in this
straightforward and enjoyable game is to eliminate every alien invader in space.
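The W/S/A/D/Spacebar scheme can be sketched with a key-state map. In the browser this state would be driven by `keydown`/`keyup` listeners; here the handlers are plain functions so the logic can run anywhere, and the speed and player shape are assumptions:

```javascript
// Track which keys are currently held down.
const keys = {};
const onKeyDown = key => { keys[key] = true; };
const onKeyUp = key => { keys[key] = false; };

// Apply movement and shooting each frame based on the held keys.
function updatePlayer(player, speed) {
  if (keys['w']) player.y -= speed;      // up
  if (keys['s']) player.y += speed;      // down
  if (keys['a']) player.x -= speed;      // left
  if (keys['d']) player.x += speed;      // right
  if (keys[' ']) player.shooting = true; // Spacebar fires
  return player;
}

const player = { x: 100, y: 100, shooting: false };
onKeyDown('d');
onKeyDown(' ');
updatePlayer(player, 5);
console.log(player); // { x: 105, y: 100, shooting: true }
```

Using a key-state map rather than acting directly in the event handler lets the player hold multiple keys at once, e.g. moving diagonally while shooting.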
JavaScript games are entertaining, simple to make, and a fantastic method for youngsters to learn
how to code. Nearly all internet websites employ the popular programming language known as
JavaScript. A web application can be enhanced using JavaScript by adding animations and
interactivity that improve gaming and web browsing.
JavaScript's capacity to create games that can be played readily online is a common topic that
draws young people to learning how to program. It makes sense that more game developers have
been adopting JavaScript to create new content over the past 10 years as both internet
connections and computer hardware have improved. JavaScript games can be played in the
browser or on a mobile phone, so if that's your goal, it's an excellent option. Platforms and tools
can help create both 2D and 3D games that run directly in your browser, and beyond web-based
games, JavaScript has been growing in popularity in mobile game development.
The goal to give players an engaging, interactive, and entertaining experience is what drives
developers to create JavaScript games.
A platform for creativity, adventure, and discovery, JavaScript games let players enter virtual
worlds and escape reality. They demonstrate the possibilities and skills of JavaScript as a
programming language while also offering a way of amusement, leisure, and personal
development.
The lack of dynamic enemy behavior in the Space War Game currently results in repetitive
gameplay and little challenge for the players. This restricts player immersion and engagement,
which makes for a less enjoyable gaming experience.
Objectives:
By achieving these objectives, the project aims to create a more immersive and challenging
gameplay environment in the Space War Game through the effective use of AI concepts and
algorithms.
1. Dynamic Enemy Management:
- Modify the `initData()` function to initialize enemy count and cooldown values.
- Increase the enemy count when fuel is collected and decrease it when fuel is depleted.
- Adjust the `onAdd` and `onRemove` event handlers of the `fuels` object to update the enemy
cooldown values accordingly.
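This first objective can be sketched as follows. `initData()` and the `fuels` object's `onAdd`/`onRemove` hooks come from the objective itself, but the shape of the game state and the specific numbers are assumptions:

```javascript
// Assumed game state: enemy count, spawn cooldown (in frames), and fuels.
const game = { enemyCount: 0, enemyCooldown: 0, fuels: [] };

function initData() {
  game.enemyCount = 3;       // starting number of enemies
  game.enemyCooldown = 120;  // frames between enemy spawns
}

const fuels = {
  onAdd() {                  // fuel collected: raise the pressure on the player
    game.enemyCount += 1;
    game.enemyCooldown = Math.max(30, game.enemyCooldown - 10);
  },
  onRemove() {               // fuel depleted: ease off
    game.enemyCount = Math.max(1, game.enemyCount - 1);
    game.enemyCooldown += 10;
  },
};

initData();
fuels.onAdd();
console.log(game.enemyCount, game.enemyCooldown); // 4 110
```

Clamping the values keeps the game from spiralling into impossible or trivial difficulty.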
2. Pathfinding Implementation:
- Integrate a pathfinding algorithm to enable enemies to navigate the game space and engage
the player strategically.
- Implement the selected algorithm to calculate optimal paths for enemies to pursue and attack
the player.
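The objective does not name a specific pathfinding algorithm. A minimal breadth-first-search sketch over a tile grid is shown below; A* with a distance heuristic would be a common upgrade. The grid representation (0 = open cell, 1 = obstacle) is an assumption:

```javascript
// Breadth-first search: returns the shortest path from start to goal
// as a list of [x, y] cells, or null if the goal is unreachable.
function findPath(grid, start, goal) {
  const key = ([x, y]) => `${x},${y}`;
  const queue = [[start]];
  const seen = new Set([key(start)]);
  while (queue.length) {
    const path = queue.shift();
    const [x, y] = path[path.length - 1];
    if (x === goal[0] && y === goal[1]) return path;
    // Explore the four orthogonal neighbours.
    for (const [nx, ny] of [[x + 1, y], [x - 1, y], [x, y + 1], [x, y - 1]]) {
      if (grid[ny]?.[nx] === 0 && !seen.has(key([nx, ny]))) {
        seen.add(key([nx, ny]));
        queue.push([...path, [nx, ny]]);
      }
    }
  }
  return null;
}

const grid = [
  [0, 0, 0],
  [1, 1, 0],  // a wall the enemy must route around
  [0, 0, 0],
];
const path = findPath(grid, [0, 0], [0, 2]);
console.log(path.length); // 7 cells, including start and goal
```

An enemy would recompute or follow this path each frame to pursue the player around obstacles.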
3. Behavior Tree AI:
- Utilize Behavior Tree AI to create intelligent enemy behaviors and decision-making processes.
- Design a behavior tree structure with nodes representing different actions and conditions.
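A minimal behavior-tree sketch: a selector tries its children until one succeeds, a sequence runs its children until one fails. The node names, thresholds, and enemy logic are illustrative, not taken from the game's actual code:

```javascript
// Condition nodes return true/false; action nodes perform a side effect.
const condition = fn => state => fn(state);
const action = fn => state => { fn(state); return true; };

// Composite nodes: sequence = AND, selector = OR (with short-circuiting).
const sequence = (...children) => state => children.every(c => c(state));
const selector = (...children) => state => children.some(c => c(state));

const enemyTree = selector(
  sequence(
    condition(s => s.playerDistance < 50),   // player close?
    action(s => { s.action = 'attack'; })
  ),
  sequence(
    condition(s => s.health < 20),           // badly hurt?
    action(s => { s.action = 'flee'; })
  ),
  action(s => { s.action = 'patrol' })       // default behavior
);

const state = { playerDistance: 120, health: 10 };
enemyTree(state);
console.log(state.action); // 'flee' -- too far to attack, too hurt to patrol
```

Ticking the tree once per frame lets each enemy re-evaluate its situation continuously.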
4. Neural Networks:
- Employ machine learning techniques to train neural networks for enemy AI.
- Define input features such as player position, enemy position, and game state.
5. Reinforcement Learning:
- Apply reinforcement learning algorithms to train AI agents that control enemy behavior.
- Define a reward system to incentivize desirable enemy actions and discourage unfavorable
behaviors.
- Train the AI agents by allowing them to interact with the game environment and learn
optimal strategies.
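As a hedged sketch of the reinforcement learning objective, tabular Q-learning is shown below. The states, actions, and reward values are illustrative; a real game state would be far richer:

```javascript
const actions = ['chase', 'retreat'];
const Q = {}; // Q[state][action] -> estimated long-term reward

function getQ(state, action) {
  return (Q[state] && Q[state][action]) || 0;
}

// Standard Q-learning update: move the estimate toward the observed
// reward plus the discounted value of the best next action.
function update(state, action, reward, nextState, alpha = 0.5, gamma = 0.9) {
  const bestNext = Math.max(...actions.map(a => getQ(nextState, a)));
  Q[state] = Q[state] || {};
  Q[state][action] =
    getQ(state, action) + alpha * (reward + gamma * bestNext - getQ(state, action));
}

// Reward chasing when the player is weak; after a few episodes the
// agent prefers 'chase' in that state.
for (let i = 0; i < 10; i++) update('playerWeak', 'chase', 1, 'playerWeak');
update('playerWeak', 'retreat', -1, 'playerWeak');
console.log(getQ('playerWeak', 'chase') > getQ('playerWeak', 'retreat')); // true
```

In the game, the reward signal would come from events such as hitting the player (positive) or being destroyed (negative).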
A flowchart is a graphical depiction of a series of steps. It is commonly used to show the flow of
algorithms, workflows, or processes in sequential order. A flowchart typically depicts the steps
as various types of boxes, with arrows linking them in sequence.
P.E.A.S stands for Performance measure, Environment, Actuators, and Sensors. Here's how it
applies to the JavaScript space war game:
Performance Measure: The performance measure in the space war game could be based on
various factors, such as the player's score, the number of enemies destroyed, the player's survival
time, or any other metrics that determine the success or progress of the game.
Environment: The environment in the space war game refers to the virtual space where the
game takes place. It includes the game world, the positions of player and enemy spaceships,
obstacles, projectiles, and any other relevant entities or elements that make up the game
environment.
Actuators: Actuators are the means through which the agents in the game can interact with the
environment. In the space war game, the player's spaceship and enemy spaceships can act as
actuators. They can move, shoot projectiles, use power-ups, and perform other actions to
influence the game environment.
Sensors: Sensors are the mechanisms that allow the agents to perceive and gather information
about the game environment. In the space war game, sensors can include collision detection
mechanisms, position tracking, projectile detection, and other methods to observe and collect
data about the state of the game world.
By considering the P.E.A.S framework, you can design and analyze the AI aspects of the game
more effectively. It helps define the goals, understand the environment, determine the actions the
agents can take, and identify the sensory inputs required for intelligent decision-making in the
game.
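The P.E.A.S breakdown above can be written down as a plain description object; the specific entries are one reasonable reading of this game rather than the only one:

```javascript
// P.E.A.S description of the space war game as a JavaScript object.
const peas = {
  performanceMeasure: ['score', 'enemiesDestroyed', 'survivalTime'],
  environment: ['gameWorld', 'playerShip', 'enemyShips', 'obstacles', 'projectiles'],
  actuators: ['move', 'shoot', 'usePowerUp'],
  sensors: ['collisionDetection', 'positionTracking', 'projectileDetection'],
};

console.log(Object.keys(peas)); // the four P.E.A.S components
```

Keeping such a description alongside the code makes it easy to check that every sensor and actuator the design assumes is actually implemented.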
Requirements research is concerned with efforts that evaluate the demands or requirements of
the system and its users.
Google Chrome
This game was created using web development technologies such as HTML, CSS, and
JavaScript; the entire system was built in JavaScript. The application also uses a JavaScript UI
for a more stylish and user-friendly interface, and we have used some AI concepts to build this
game.
Agents:
In this game, various entities can be considered as agents. These include the player's spaceship,
enemy spaceships, and any other entities that exhibit autonomous behavior. Agents have their
own decision-making processes and interact with the game environment based on certain rules or
strategies.
Collision Detection:
Collision detection is a crucial aspect of the game's AI. It involves detecting and handling
collisions between different game entities, such as enemy ships, player bullets, and obstacles.
Collision detection algorithms help determine when two entities intersect and trigger appropriate
actions, such as damaging the player or destroying enemy ships.
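Axis-aligned bounding-box (AABB) overlap testing is the usual starting point for 2D collision detection; the entity shape (rectangles with `x`, `y`, `width`, `height`) is an assumption about the game's data, offered as a sketch:

```javascript
// Two axis-aligned rectangles overlap iff they overlap on both axes.
function collides(a, b) {
  return (
    a.x < b.x + b.width &&
    a.x + a.width > b.x &&
    a.y < b.y + b.height &&
    a.y + a.height > b.y
  );
}

const player = { x: 10, y: 10, width: 20, height: 20 };
const enemy  = { x: 25, y: 15, width: 20, height: 20 };
const rock   = { x: 100, y: 100, width: 10, height: 10 };
const hit  = collides(player, enemy);
const miss = collides(player, rock);
console.log(hit, miss); // true false
```

On each collision the game would then trigger the appropriate action, such as reducing health or destroying the enemy ship.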
Agents in your game make decisions based on various factors and conditions. They may use
decision-making algorithms, such as finite state machines or behavior trees, to determine their
actions. For example, enemies may decide to chase the player, attack the player, or evade
incoming fire based on their current states and the game situation.
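The finite state machine mentioned above can be sketched as a transition function over the chase/attack/evade behaviors; the thresholds and inputs are illustrative:

```javascript
// Given the current state and observations, return the next state.
function nextState(state, { playerDistance, underFire }) {
  switch (state) {
    case 'chase':
    case 'attack':
      if (underFire) return 'evade';              // priority: dodge incoming fire
      return playerDistance < 40 ? 'attack' : 'chase';
    case 'evade':
      return underFire ? 'evade' : 'chase';       // resume the chase once safe
    default:
      return 'chase';
  }
}

let state = 'chase';
state = nextState(state, { playerDistance: 30, underFire: false }); // -> 'attack'
state = nextState(state, { playerDistance: 30, underFire: true });  // -> 'evade'
console.log(state); // 'evade'
```

An FSM like this is easy to debug because the enemy is always in exactly one named state.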
The behavior of enemy spaceships in your game involves AI concepts. Enemies may exhibit
different behaviors, such as aggressive attacking, defensive maneuvers, or cooperative tactics.
Their behavior can be controlled using rule-based systems, state machines, or other AI
techniques to create dynamic and challenging gameplay.
Reactive AI refers to agents that react to their immediate surroundings and make decisions in
real time based on sensory input. In this space war game, enemy spaceships may employ
reactive AI to respond to player movements, avoid collisions, or adjust their attack strategies
based on the current game state.
2. Collision Detection:
Function `playerCollision` for detecting collisions between the player and entities
3. Decision Making:
4. Enemy Behavior:
Enemy spaceships' behavior is defined in the `updateEmenys` function based on collision with the
player.
5. Reactive AI:
Enemy spaceships react to collision with the player in the `updateEmenys` function
To improve player immersion and engagement through enhanced AI-driven interactions, the
following code lines are relevant:
In the `updateEmenys()` method, the player's collision with enemies is detected using the
`playerCollision()` function. This interaction triggers score updates, fuel updates, shooting
updates, and other game-related actions, resulting in a more engaging gameplay experience.
By incorporating these interactions between the player and various game entities, the AI-driven
enemies and friends contribute to a more immersive and engaging gameplay experience for the
player.
Status – Success
Status – Success
Once you click the Start icon, the game interface will become visible:
Read the game instructions, play the game, and have some fun in your free time.
Enhanced gameplay:
By incorporating AI concepts, you can introduce intelligent behaviors to the game's characters,
making them more challenging and engaging for the players. The AI can control enemy
spaceships and create a more immersive gaming experience.
AI can enable the enemy spaceships to exhibit intelligent decision-making abilities, such as
pathfinding and strategic maneuvering. This can make the gameplay more unpredictable and
enjoyable, as players face unique challenges in each encounter.
Adaptive difficulty:
Using AI, the game can analyze the player's performance and adjust the difficulty level
accordingly. This ensures that the game remains challenging enough to keep the players engaged
without becoming too easy or too difficult.
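A minimal sketch of such adaptive difficulty: nudge a difficulty scalar toward the player's recent performance. The metric, target, and bounds below are assumptions for illustration:

```javascript
// Move difficulty up when the player beats the target score rate,
// down otherwise, clamped so the game is never trivial or impossible.
function adjustDifficulty(difficulty, recentScoreRate, target = 10) {
  const delta = (recentScoreRate - target) * 0.01;
  return Math.min(2.0, Math.max(0.5, difficulty + delta));
}

let difficulty = 1.0;
difficulty = adjustDifficulty(difficulty, 25); // player doing well -> harder
console.log(difficulty.toFixed(2)); // 1.15
```

The difficulty scalar could then multiply enemy speed, spawn rate, or accuracy.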
Personalized experience:
AI can learn from the player's actions and adapt the game's behavior accordingly. It can analyze
patterns, preferences, and playing styles, allowing for a more personalized experience tailored to
the individual player.
Implementing AI concepts in a simple game can serve as a learning tool for both developers and
players. It allows developers to experiment with AI algorithms and techniques, while players can
gain insights into the capabilities and limitations of AI systems.
Increased complexity:
Introducing AI behaviors adds extra code paths and states that must be designed, tested, and
debugged, making the game noticeably more complex to build and maintain.
Performance considerations:
AI routines such as pathfinding and decision-making run every frame, so they must be kept
efficient to avoid degrading the game's frame rate.
Balancing issues:
Implementing AI-controlled enemies can be a delicate balancing act. If the AI is too weak, the
game becomes too easy and loses its appeal. Conversely, if the AI is too strong, it can frustrate
players and discourage them from continuing.
Learning curve for players: Introducing AI concepts may make the game more challenging for
some players. Understanding and adapting to the behaviors and strategies of AI-controlled
enemies may require a higher skill level, potentially alienating casual or less experienced
players.
Overall, while implementing AI concepts in a Simple Space war game can enhance gameplay
and create a more dynamic experience, it also introduces additional challenges and complexities
that need to be carefully addressed during development.
Improved Enemy Behavior: Expand the repertoire of enemy behaviors to make them more
diverse and challenging. Implement more complex rule-based systems, state machines, or even
machine learning algorithms to create enemy spaceships that exhibit strategic decision-making,
cooperative tactics, or learn from player patterns.
Adaptive Reactive AI: Enhance the reactive AI of enemy spaceships by making it adaptive to
changing game situations. Instead of relying solely on predefined rules, allow the enemy
spaceships to learn and adjust their reactive behaviors based on the player's actions,
environmental factors, and evolving game state. This will make the enemies more responsive and
unpredictable.
Player-Adaptive AI: Implement AI that adapts to the player's style and skill level. The game
could analyze the player's performance and adjust the behavior of enemy spaceships accordingly.
For example, if the player is struggling, the AI could provide temporary assistance or introduce
easier enemy behaviors. Conversely, if the player is performing well, the AI could ramp up the
challenge for a more rewarding experience.
Expanded Game Modes and Challenges: Introduce additional game modes or challenges that
leverage the AI concepts. For example, you could create boss battles where the enemy
spaceships employ advanced decision-making and behaviors. You could also design levels or
scenarios that require the player to solve puzzles or overcome strategic challenges, adding depth
and variety to the gameplay.
By incorporating these AI concepts, the space war game offers players a challenging and
engaging experience. The intelligent behaviors of the agents, accurate collision detection,
dynamic decision-making, diverse enemy behaviors, and reactive AI contribute to a gameplay
environment that feels realistic and requires strategic thinking and skill from the player.