
Q. 1. What is Artificial Intelligence? State typical AI problems. Explain approaches, advantages and limitations of AI.

Artificial Intelligence (AI) is the field of programming machines to think and learn like humans, so they can perform tasks that usually require human intelligence. Typical AI problem areas include:

1. Machine Learning: Machines learn from big sets of data to make predictions or decisions, like
recognizing images, understanding speech, or suggesting things.

2. Natural Language Processing (NLP): Machines learn to understand and work with human language,
such as translating languages, analyzing sentiments, or answering questions.

3. Computer Vision: Machines learn to see and understand images or videos, like recognizing objects,
faces, or dividing images into parts.

4. Robotics: AI is used to make robots smart, so they can move around, manipulate things, and interact
with people.

Approaches to AI:


i. Strong AI: Strong AI aims to build machines that can truly reason and solve problems. Strong
AI maintains that suitably programmed machines are capable of cognitive mental states.
ii. Weak AI: Weak AI refers to computer-based systems that cannot truly reason and solve
problems but can act as if they were intelligent. Weak AI holds that suitably programmed machines
can simulate human cognition.
iii. Applied AI: Aims to produce commercially viable "smart" systems, such as a
security system that can recognize the faces of people who are permitted to enter a
premises.
iv. Cognitive AI: Computers are used to test theories about how the human mind works. Solving
such problems at an abstract level is the cognitive approach.

Advantages of AI:

1. Automation: AI automates boring and repetitive tasks, making things faster and more efficient.

2. Decision Making: AI can analyze a lot of data and make smart decisions that humans might miss.

3. Accuracy: AI algorithms are really good at tasks like recognizing images or analyzing data, making
fewer mistakes.

4. Handling Complexity: AI can solve difficult problems with lots of factors, giving solutions that humans
might struggle with.

AI Limitations:

1. Common Sense: AI may not understand things that humans find obvious.
2. Data Dependency: AI needs good and enough data to work well. Biased or not enough data
can lead to mistakes.
3. Ethical Concerns: AI raises questions about privacy, fairness, and job loss, so we need rules
to use it responsibly.
4. Limited Creativity: AI is not great at tasks that need imagination, new ideas, or
understanding emotions.


It's important to remember that AI is always evolving, so new ideas, methods, and limitations may come
up over time.

Q. 2. Soft Computing vs Hard Computing

In simple words, Soft Computing is flexible, approximate, and deals with uncertain or fuzzy information,
while Hard Computing is precise, deterministic, and works with exact models and data.
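The contrast can be illustrated with a tiny sketch: a hard-computing check uses one crisp cutoff, while a soft-computing (fuzzy) version returns a degree of membership. The cutoff values below are made-up assumptions for illustration only.

```python
def hard_is_tall(height_cm):
    # Hard computing: precise, deterministic -- a crisp yes/no at an exact cutoff
    return height_cm >= 180

def soft_is_tall(height_cm):
    # Soft computing: approximate, tolerant of uncertainty -- a fuzzy membership
    # degree in [0, 1], rising linearly between 160 cm and 180 cm (illustrative)
    return min(1.0, max(0.0, (height_cm - 160) / 20))
```

The hard version can only answer True or False; the fuzzy version can say someone is "tall to degree 0.5", which is the kind of graded, uncertain information soft computing is designed to handle.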

Q. 3. What is Artificial Neural Network? With neat sketch, explain different terms used in
ANN with characteristics


Artificial Neural Network (ANN) is a computational model inspired by the structure and
functioning of the human brain. It consists of interconnected artificial neurons that process and
transmit information through weighted connections.

Key Terms in ANN:

1. Neuron: Basic unit of an ANN that processes and transmits information.


2. Input Layer: First layer that receives input data.

3. Hidden Layer: Intermediate layers that perform computations and extract features.

4. Output Layer: Final layer that produces the network's output.

5. Weights: Strength of connections between neurons, determining their importance.

6. Bias: Constant term added to weighted inputs to allow learning.

7. Activation Function: Non-linear function applied to weighted inputs, producing neuron output.

8. Forward Propagation: Transmitting input data through the network to produce an output.

9. Backpropagation: Updating weights by propagating error from output to input layer.

10. Training: Adjusting weights using a training dataset to minimize prediction errors.
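Forward propagation through a single neuron (terms 1 to 8 above) can be sketched as follows; the inputs, weights, and bias are invented values for illustration.

```python
import math

def neuron_forward(inputs, weights, bias):
    # Net input: weighted sum of inputs plus bias (terms 5 and 6)
    net = sum(x * w for x, w in zip(inputs, weights)) + bias
    # Activation function (term 7): sigmoid squashes the net input into (0, 1)
    return 1.0 / (1.0 + math.exp(-net))

# Hypothetical input vector and parameters
output = neuron_forward([0.5, 0.3], [0.4, 0.7], bias=0.1)
```

A full network repeats this computation layer by layer; backpropagation (term 9) then runs the error in the opposite direction to adjust the weights.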

Characteristics of ANN:

1. Adaptability: ANNs can learn and improve through training.

2. Parallel Processing: Neurons in the network process information simultaneously.

3. Non-linearity: Activation functions introduce non-linear behavior.

4. Fault Tolerance: ANNs can function even with damaged neurons or connections.

5. Generalization: ANNs can make predictions on unseen data based on training examples.

6. Distributed Knowledge: Knowledge is distributed across the network.


Q. 4. Explain the basic terminology associated with neural networks [input, weights, net input
function, activation function, list of activation functions, error propagation].

Basic Terminology Associated with Neural Networks:

1. Neuron: Basic unit of a neural network that processes information.


2. Inputs: Values or data provided to a neuron for processing.
3. Weights: Strength or importance assigned to each input.
4. Net Input Function: Calculates the weighted sum of inputs.
5. Activation Function: Determines the output of a neuron based on the net input.
- Sigmoid: Outputs values between 0 and 1, commonly used for binary classification.
- ReLU: Outputs 0 for negative input, passes positive input unchanged.
- Tanh: Outputs values between -1 and 1, similar to sigmoid but symmetric.
- Softmax: Generates a probability distribution over multiple classes.
6. Error Propagation: Calculating and propagating the difference between predicted and desired
output back through the network.
- Adjusts weights to minimize the error and improve network performance.
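The net input function and the activation functions listed above can be written out directly; this is a minimal stdlib-only sketch, not a library implementation.

```python
import math

def net_input(inputs, weights, bias=0.0):
    # Weighted sum of inputs plus bias
    return sum(x * w for x, w in zip(inputs, weights)) + bias

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))   # output in (0, 1)

def relu(x):
    return max(0.0, x)                  # 0 for negative input, x unchanged otherwise

def tanh(x):
    return math.tanh(x)                 # output in (-1, 1), symmetric around 0

def softmax(xs):
    # Probability distribution over classes; max subtracted for numerical stability
    m = max(xs)
    exps = [math.exp(x - m) for x in xs]
    total = sum(exps)
    return [e / total for e in exps]
```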

Understanding these terms is crucial for comprehending the functioning and training of neural
networks.
Q. 5. Compare biological neurons with ANN.
Difference between Biological Neuron Network and Artificial Neural Network:
Biological Neuron Network:
1. Soma (cell body): where the cell nucleus is located.
2. Dendrites: where the nerve is connected to the cell body.
3. Axon: carries the impulses of the neuron.

1. Nature: Found in living organisms, including humans and animals.


2. Structure: Consists of a soma (cell body), dendrites, and an axon.
3. Communication: Electric impulses and chemical processes occur between synapses and
dendrites.
4. Learning: Biological neurons have the ability to adapt and learn from experiences.
5. Complexity: Biological networks are highly complex and interconnected.
6. Limitations: Subject to biological constraints and limitations.

Artificial Neural Network (ANN):
1. Creation: Designed and implemented by humans using computer systems.
2. Structure: Composed of artificial neurons, layers, and connections.
3. Communication: Computation is performed through mathematical operations and weighted
connections.
4. Learning: ANNs can learn and improve through training algorithms.
5. Simplicity: ANNs are simplified models inspired by biological networks but lack the
complexity of the human brain.
6. Scalability: ANNs can be easily scaled and adapted for various applications.
In simple words, biological neuron networks exist in living organisms and rely on electrical and
chemical processes for communication, whereas artificial neural networks are created by
humans using computers and use mathematical operations and weighted connections for
computation. Biological networks are complex and adaptive, while artificial networks are
simplified models that can be scaled and trained for specific tasks.
Q. 6. Explain various learning techniques used in Artificial Neural Network.

1. ANN's main property is its ability to learn, i.e., adapt itself based on input.
2. Learning or training is the process where the neural network adjusts its parameters to
generate the desired response.
3. There are two types of learning in ANNs:
- Parameter learning: Updates the connection weights.
- Structure learning: Focuses on changing the network structure, such as the number of
neurons and their connections.
4. Parameter and structure learning can be performed together or separately.
5. Learning in ANNs can be broadly classified into three categories:
- Supervised learning: Training with labeled input-output pairs.
- Unsupervised learning: Discovering patterns and relationships in unlabeled data.
- Reinforcement learning: Learning through interactions with the environment, maximizing
rewards.

Supervised Learning in Artificial Neural Networks:

1. Learning with a Teacher: In supervised learning, a teacher guides the learning
process, providing correct answers or desired outputs for each input.
2. Input-Output Pairs: In ANN, supervised learning requires input vectors and
corresponding target vectors, which represent the desired outputs.
3. Training Pair: The input vector and target vector together form a training pair.
4. Precise Information: The network is explicitly informed about the expected
output for each input.
5. Training Process: During training, an input vector is presented to the network,
which produces an output vector.
6. Comparison and Error Signal: The actual output vector is compared to the
desired (target) output vector. If there is a difference, an error signal is generated
by the network.
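The comparison-and-error-signal step can be sketched with a simple delta-rule update on one linear neuron. The learning rate and all values below are illustrative assumptions, not a specific algorithm from the text.

```python
def delta_rule_step(inputs, weights, target, lr=0.1):
    # Forward pass of one linear neuron
    output = sum(x * w for x, w in zip(inputs, weights))
    # Error signal: difference between desired (target) and actual output
    error = target - output
    # Each weight moves in proportion to the error and its own input
    new_weights = [w + lr * error * x for x, w in zip(inputs, weights)]
    return new_weights, error

new_w, err = delta_rule_step([1.0, 0.5], [0.2, 0.2], target=1.0)
```

Repeating this step over many training pairs drives the error signal toward zero, which is exactly the training process described above.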

Unsupervised Learning:

1. Learning without a Teacher: Unsupervised learning occurs without
external guidance or a teacher.
2. Pattern Discovery: Input vectors of similar types are grouped together
to identify patterns or categories.
3. Clustering: The network organizes input patterns into clusters during
the training process.
4. Output Response: When a new input pattern is presented, the network
provides an output indicating the corresponding class or group.
5. Self-Discovery: The network discovers patterns, regularities, and
features within the input data without feedback from the environment.
6. Parameter Change: The network's parameters change as it discovers
and learns from the input data.
7. Self-Organizing: The network forms clusters by identifying similarities
and dissimilarities among the input objects.

Reinforcement Learning:
1. Similar to supervised learning, but with less available information.
2. Feedback indicates the correctness of the output, not precise values.
3. Learning is based on evaluative reinforcement signals.
4. The network receives feedback from its environment.
5. Feedback is evaluative, not instructive.
6. Critic signal generator processes external reinforcement signals.
7. Adjusts weights of the ANN for better future feedback.
8. Aims to improve performance based on received reinforcement signals.
Q. 7. State and explain McCulloch-Pitts and Hebb networks.

McCulloch-Pitts Network:
- Proposed by McCulloch and Pitts in 1943.
- Consists of binary threshold neurons.
- Inputs are binary, and outputs are binary based on a predefined threshold.
- Activation function is a simple threshold function.
- If the weighted sum of inputs exceeds the threshold, the neuron fires and produces an output
of 1; otherwise, it produces an output of 0.
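A McCulloch-Pitts neuron can be sketched in a few lines; here it realizes an AND gate with both weights set to 1 and threshold 2, a standard textbook choice (these values are one possible assignment, not the only one).

```python
def mp_neuron(inputs, weights, threshold):
    # Fires (outputs 1) iff the weighted sum of binary inputs reaches the threshold
    net = sum(x * w for x, w in zip(inputs, weights))
    return 1 if net >= threshold else 0

# Truth table of AND realized by a McCulloch-Pitts neuron
and_table = [mp_neuron([a, b], [1, 1], threshold=2) for a in (0, 1) for b in (0, 1)]
```

Other logic functions such as OR (threshold 1) follow by analysis, choosing weights and threshold by hand, since the neuron has no training algorithm.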
Hebb Network:
- Proposed by Donald Hebb in 1949.
- Focuses on synaptic connections between neurons.
- Uses the Hebbian learning rule: "cells that fire together, wire together."
- Synaptic weights are adjusted based on the simultaneous activation of connected neurons.
- Reinforces connections between neurons that frequently activate together.
- Demonstrates synaptic plasticity and is associated with associative learning.
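The Hebbian rule ("cells that fire together, wire together") can be sketched as the weight update Δw_i = η·x_i·y. The bipolar values and learning rate below are illustrative assumptions.

```python
def hebb_update(weights, inputs, output, lr=1.0):
    # Hebb rule: strengthen a connection when its input and the output are
    # active together (same sign), weaken it when they disagree
    return [w + lr * x * output for w, x in zip(weights, inputs)]

# Bipolar (-1/+1) values are common in Hebb nets; one update from zero weights
w = [0.0, 0.0]
w = hebb_update(w, [1, -1], output=1)
```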

8. The figure shows excitatory weighted connections from inputs x1 to xn and
inhibitory weighted interconnections from xn+1 to xn+m.

9. The firing of the neuron is determined by a threshold activation function.


10. To achieve absolute inhibition, the threshold of the activation function must
satisfy a specific condition.

11. The neuron will fire if it receives "k" or more excitatory inputs and no inhibitory
inputs.

12. The McCulloch-Pitts neuron does not have a specific training algorithm.
13. Analysis is performed to determine the weights and threshold values for the
neuron.
14. The McCulloch-Pitts neuron is used as a building block to model various
functions or phenomena based on logical operations.
