Assignment AI Society E18CSE173


ECSE487L: AI and Society

Assignment 1

Enrollment No: E18CSE173


Name: Shubh Patel
Batch: EB05

Q1) Why did game playing become the central focus in AI research?

Playing games has always interested children and adults alike. If you have ever played a video game on your device, you have come across AI (Artificial Intelligence). Whether you played strategy games, first-person shooters, racing games, or even chess, you will always find opponents controlled by artificial intelligence; the opponents or computer-controlled characters you play against or alongside often have AI behind them. Game playing became a central focus of AI research because it lets us study the dynamics of games and evaluate whether AI can truly compete with humans, or even outperform them.

One motive for training AI to play games is that, in contrast to real life, games can be quantified. They provide a means of measuring AI's development and capability: when you play a game, you obtain numerical scores or a countable total of victories and defeats. For instance, the 1997 chess victory over Kasparov was preceded by programs taking on amateur players in the 1950s and first defeating master players in the 1980s. Because chess enabled us to compare machines against human intellect in one well-defined area, the advancement of artificial intelligence could be tracked.

Another argument is that games provide a safe environment in which AI can be trained and can grow. Games serve as a means of simulating critical challenges for artificial intelligence to solve: poker requires an AI to cope with incomplete information, and many video games create settings in which an AI must make in-game judgments in real time.

Q2) Discuss any two techniques which can help in explaining or interpreting an
AI model.

The goal of model interpretation is to better understand how a model arrives at its decisions. This supports fairness, accountability, and transparency, which in turn gives people enough trust to apply these models to real-world situations with significant implications for society and business.

The two techniques which are often used are:

Exploratory Visualization and Analysis


The concept of exploratory analysis is not new. Data visualisation has long been one of the most effective techniques for extracting underlying insights from data. Some of these approaches can help us find significant features and relevant representations in our data, which can suggest, in a human-interpretable form, what might be driving a model's decisions.

Because we frequently deal with a vast feature space (the curse of dimensionality), dimensionality reduction techniques are particularly beneficial here. Reducing the feature space lets us see what could be driving a model's judgments.
Some of these approaches are:

Dimensionality Reduction: Latent Semantic Indexing (LSI), Principal Component Analysis (PCA), and Self-Organizing Maps (SOM)

Manifold Learning: t-Distributed Stochastic Neighbor Embedding (t-SNE)

Variational Autoencoders: automated representation learning with a variational autoencoder (VAE)

Clustering Methods: Hierarchical Clustering
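As a minimal sketch of one of the techniques listed above, PCA can be implemented directly with NumPy's SVD; the function name `pca_2d` and the toy data here are illustrative only:

```python
import numpy as np

def pca_2d(X):
    """Project data onto its first two principal components via SVD.

    X: (n_samples, n_features) array. Returns an (n_samples, 2) projection.
    """
    Xc = X - X.mean(axis=0)            # center each feature
    U, S, Vt = np.linalg.svd(Xc, full_matrices=False)
    return Xc @ Vt[:2].T               # coordinates along the top-2 components

# Toy example: 100 points in 5-D whose variance lies mostly in 2 directions
rng = np.random.default_rng(0)
X = rng.normal(size=(100, 2)) @ rng.normal(size=(2, 5))
X += 0.01 * rng.normal(size=(100, 5))  # small noise in the remaining dims
Z = pca_2d(X)
print(Z.shape)                          # a 2-D view suitable for plotting
```

Scattering `Z` and colouring points by a model's predictions is one common way to visually inspect what the model may be responding to.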

Model Performance Evaluation Metrics

The evaluation of model performance is a critical stage in any data science lifecycle for selecting the optimal model.
This allows us to assess how well our algorithms are doing, compare different performance measures across models,
and select the top models. This also allows us to tweak and improve hyper-parameters in a model to achieve the
greatest performance on the data we're working with. Depending on the type of challenge, there are usually specific
standard assessment measures in place.

Supervised Learning Regression: For regression problems, we can use common metrics such as the coefficient of determination (R-squared), the root mean squared error (RMSE), and the mean absolute error (MAE).

Unsupervised Learning Clustering: For clustering problems, we can use measures such as the silhouette coefficient, homogeneity, completeness, and the Calinski-Harabasz index.
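The regression metrics named above are simple enough to sketch in plain Python (the sample values are hypothetical):

```python
import math

def mae(y_true, y_pred):
    """Mean absolute error: average magnitude of the residuals."""
    return sum(abs(t - p) for t, p in zip(y_true, y_pred)) / len(y_true)

def rmse(y_true, y_pred):
    """Root mean squared error: penalises large residuals more heavily."""
    return math.sqrt(sum((t - p) ** 2 for t, p in zip(y_true, y_pred)) / len(y_true))

def r2(y_true, y_pred):
    """Coefficient of determination: 1 - SS_res / SS_tot."""
    mean_t = sum(y_true) / len(y_true)
    ss_res = sum((t - p) ** 2 for t, p in zip(y_true, y_pred))
    ss_tot = sum((t - mean_t) ** 2 for t in y_true)
    return 1 - ss_res / ss_tot

y_true = [3.0, 5.0, 7.0, 9.0]   # hypothetical targets
y_pred = [2.5, 5.0, 7.5, 9.0]   # hypothetical model outputs
print(mae(y_true, y_pred), rmse(y_true, y_pred), r2(y_true, y_pred))
```

Comparing such scores across candidate models, or across hyper-parameter settings of one model, is the selection step described above.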

Q3) What is AI winter? Discuss in brief any two reasons that could be a cause for AI winter?

The “AI winter” was a period of low interest in artificial intelligence that lasted from the late 1980s to the early 2000s. The lack of interest resulted in a lack of funding.

The phrase derives from the idea of a "nuclear winter," in which the fallout from a nuclear conflict would alter weather patterns.

The "winter" has been blamed on a variety of factors, including the overhyping of AI research, the interdisciplinary nature of AI and conflicts between academic departments, university budget cuts, a lack of practical applications for AI research, and cheaper general-purpose computers outperforming expensive Lisp machines.

The failures of “expert systems,” systems that were supposed to have decision-making skills equivalent to a human expert, triggered the AI winter of the 1980s. These systems were popular in the 1980s, but they were costly and unreliable. As a result, many people came to believe that AI research had been overhyped.

Another significant outcome of the AI winter was the demise of Lisp machines. These were high-priced workstations designed to run the Lisp programming language, the primary language for AI research at the time, and they also served as the primary platform for expert systems. By the late 1980s, cheaper general-purpose computers based on x86, Motorola 68000, or SPARC processors outperformed Lisp machines.

These changes resulted in a sharp decline in interest in AI research from the late 1980s to the 1990s. As funding became more difficult to come by, researchers began to describe their work under different titles. One long-standing effect of the AI winter is the present emphasis on “machine learning” within AI.
In the 2000s, interest in AI research surged again. The same force that destroyed Lisp machines appears to have launched AI's resurgence: advances in computer performance. This resulted in new AI applications and methods.

Q4) Explain in detail how exactly AI is being used for the following applications:
(i) self driving cars
(ii) predicting adverse drug reactions

(i)

One of the most important applications of artificial intelligence (AI) is autonomous driving. Autonomous vehicles (AVs) are outfitted with a variety of sensors, including cameras, radars, and lidar, to aid in their understanding of their environment and in course planning. These sensors create enormous amounts of data, and AVs require supercomputer-like, near-instant processing capabilities to make sense of it. Companies building AV systems therefore rely significantly on AI, namely machine learning and deep learning, to effectively analyse massive amounts of data and to train and test their autonomous driving systems.

Deep learning has helped firms accelerate AV development in recent years. Deep neural networks (DNNs) are increasingly being used by these firms to handle sensor data more efficiently. DNNs enable AVs to learn how to traverse the environment on their own from sensor data, rather than following a manually created set of rules such as "stop if you see red." These algorithms are inspired by the human brain, which implies that they learn via experience. According to NVIDIA, a deep learning specialist, if a DNN is shown photos of a stop sign in different situations, it can learn to detect stop signs on its own. Companies building AVs, however, must write not just one but a complete collection of DNNs, each dedicated to a distinct job, in order to achieve safe autonomous driving. There is no predetermined number of DNNs necessary for autonomous driving; in fact, the list keeps expanding as new capabilities emerge. To actually drive the automobile, the signals generated by each DNN must be processed in real time, which is accomplished using high-performance computing systems.
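The "learn the rule from data rather than hand-code it" idea can be sketched with a toy one-neuron model: instead of writing "stop if you see red," we fit a logistic unit to labelled examples. The single "redness" feature and the data below are invented for illustration; real AV perception uses large DNNs over camera images:

```python
import math, random

# Hypothetical samples: x = fraction of red pixels in a sign crop,
# label 1 = stop sign, label 0 = not a stop sign.
random.seed(0)
data = [(random.uniform(0.6, 1.0), 1) for _ in range(50)] + \
       [(random.uniform(0.0, 0.4), 0) for _ in range(50)]

w, b = 0.0, 0.0                 # one weight + bias: a single "neuron"
lr = 0.5
for _ in range(1000):           # stochastic gradient descent on logistic loss
    for x, y in data:
        p = 1 / (1 + math.exp(-(w * x + b)))
        w -= lr * (p - y) * x   # gradient of the log-loss w.r.t. w
        b -= lr * (p - y)       # gradient of the log-loss w.r.t. b

def predict(x):
    return 1 if 1 / (1 + math.exp(-(w * x + b))) > 0.5 else 0

print(predict(0.9), predict(0.1))  # the rule was learned, not hand-written
```

Scaling this single neuron up to millions of units over raw pixels is, loosely, what the DNN-based perception stacks described above do.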

(ii)
Adverse drug reactions account for $7.2 billion in medical expenditures in the United States alone each year. Aside from the economic consequences, adverse drug reactions are one of the leading causes of mortality in affluent countries. The ability to predict adverse drug reaction outcomes, or to identify patients who are likely to experience adverse reactions after taking a medicinal drug product, can reduce both the number of fatalities and the financial burden of treating those patients.

To date, two studies have attempted to predict adverse drug reactions using machine learning algorithms and data from the Food and Drug Administration. One of them, however, did not achieve adequate accuracy, while the other was a small-scale proof-of-concept study. A follow-up study expanded on the findings of these two studies in order to improve prediction outcomes and provide more specific adverse-drug-reaction predictor labels. To forecast the consequences of adverse drug reactions, two deep, fully connected neural networks were built: the Binary Model and Model-7. Model-7 was a multi-label classification system trained to identify individuals who were hospitalised, died, or were hospitalised and later died. The Binary Model was trained to categorise adverse drug reaction outcomes into two categories: hospitalisation and death.

The multiclass Model-7 outperformed the typical binary machine learning models produced in earlier research (74 percent accuracy). However, it did not outperform the binary proof-of-concept artificial neural network model developed by other researchers. The binary artificial neural network model developed in this study surpassed previous binary standard machine learning models by reaching 83 percent accuracy, 8 percent higher than its rivals. Nonetheless, it did not outperform the small-scale artificial neural network model, which achieved 95 percent accuracy. The findings of this study show that, given enough processing capacity, artificial neural networks are better suited than traditional machine learning models to binary classification on large-scale imbalanced data.
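Since the studies above compare models by accuracy on imbalanced data, here is a small illustration, with entirely hypothetical labels, of why accuracy needs care in that setting: a baseline that always predicts the majority class already scores high, so class-sensitive measures such as recall are usually checked alongside it:

```python
# Hypothetical test set: only 10% of patients have the adverse outcome (label 1).
y_true = [1] * 10 + [0] * 90
y_majority = [0] * 100                               # always predict majority class
y_model = [1] * 8 + [0] * 2 + [0] * 85 + [1] * 5     # hypothetical model output

def accuracy(y_true, y_pred):
    return sum(t == p for t, p in zip(y_true, y_pred)) / len(y_true)

def recall(y_true, y_pred, positive=1):
    tp = sum(t == positive and p == positive for t, p in zip(y_true, y_pred))
    fn = sum(t == positive and p != positive for t, p in zip(y_true, y_pred))
    return tp / (tp + fn)

print(accuracy(y_true, y_majority))                  # high, yet catches no ADRs
print(accuracy(y_true, y_model), recall(y_true, y_model))
```

The majority baseline reaches 90 percent accuracy while identifying zero at-risk patients, which is why a reported accuracy figure on imbalanced ADR data is most meaningful relative to such a baseline.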
