
Interview Questions on Autoencoders


In this article, we have presented the most important Interview
Questions on Autoencoders.

Multiple Choice Questions
1. How many layers are there in an Autoencoder?

1. 2

2. 3

3. 4

4. 5

Ans: 3
An autoencoder consists of three layers:

1. Encoder

2. Code

3. Decoder

A feed-forward network is created by fully connecting the encoder
and decoder; the code functions as a single layer with its own
dimensionality. The number of nodes in the code layer is a
hyperparameter that must be set in order to create an autoencoder.

The output network of the decoder is a mirror copy of the input
encoder. The coding layer is the only source from which
the decoder draws the desired output.
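As a rough illustration, here is a minimal Keras sketch of these three parts (the layer sizes here are illustrative assumptions, not requirements):

import keras
from keras import layers

input_img = keras.Input(shape=(784,))

encoded = layers.Dense(128, activation='relu')(input_img)  # Encoder
code = layers.Dense(32, activation='relu')(encoded)        # Code (bottleneck)
decoded = layers.Dense(784, activation='sigmoid')(code)    # Decoder

autoencoder = keras.Model(input_img, decoded)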

2. Select the correct option.


A. Autoencoders are a supervised learning method.
B. The output and input of the autoencoder are identical.

1. Both the statements are TRUE.

2. Statement A is TRUE, but statement B is FALSE.

3. Statement A is FALSE, but statement B is TRUE.

4. Both the statements are FALSE.

Ans: Both the statements are FALSE.


Simply feeding in the raw input data is all that is required to train an
autoencoder. Since they don't require explicit labels to train on,
autoencoders are considered an unsupervised learning technique.
The fact that they create their own labels from the training data,
however, makes them self-supervised.
The autoencoder will produce an output that is close to the input, but
not an exact replica of it. They are not the best option if lossless
compression is what you seek.

3. Select the correct option about Denoising autoencoders.


A. The loss is between the original input and the reconstruction
from a noisy version of the input.
B. Denoising autoencoders can be used as a tool for feature
extraction.

1. Both the statements are TRUE.


2. Statement A is TRUE, but statement B is FALSE.

3. Statement A is FALSE, but statement B is TRUE.

4. Both the statements are FALSE.

Ans: Both the statements are TRUE.


The denoising encourages the encoder to retain crucial input data
while discarding irrelevant data. The hidden representation can then
be seen as preserving the relevant input features.

4. Select the correct option about Sparse autoencoders.


A. Sparse autoencoders introduce an information bottleneck by
reducing the number of nodes at the hidden layers.
B. The idea is to encourage the network to learn an encoding and
decoding which only rely on activating a small number of neurons.

1. Both the statements are TRUE.

2. Statement A is TRUE, but statement B is FALSE.

3. Statement A is FALSE, but statement B is TRUE.

4. Both the statements are FALSE.

Ans: Statement A is FALSE, but statement B is TRUE.


Sparse autoencoders cause a bottleneck in the flow of information
without reducing the number of nodes in the hidden layers. This
stimulates the network to develop encoding and decoding
techniques that only require a minimal number of neurons to be
activated. We typically regularize a network's weights, not its
activations, so it's interesting that this is a distinct technique.

5. Autoencoders are capable of learning nonlinear manifolds. (A
manifold is a continuous, non-intersecting surface.)

1. TRUE

2. FALSE

Ans: TRUE
Manifold learning is an approach in machine learning that assumes
that data lies on a manifold of a much lower dimension. These
manifolds can be linear or non-linear. Thus, the area tries to project
the data from high-dimensional space to a low dimension. For
example, principal component analysis (PCA) is an example of
linear manifold learning, whereas an autoencoder is a non-linear
dimensionality reduction (NDR) technique with the ability to learn
non-linear manifolds in low dimensions.

6. Autoencoders are trained using _.

1. Feed Forward

2. Reconstruction

3. Back Propagation

4. They do not require Training

Ans: Back Propagation


A popular algorithm for training feedforward neural networks is
backpropagation. Instead of crudely computing the gradient with
respect to each individual weight, it efficiently computes the gradient
of the loss function with respect to the network weights. Gradient
methods, including variations like gradient descent or stochastic
gradient descent, are frequently used to train multi-layer networks
and update weights to reduce loss, thanks to this efficiency.


In order to avoid duplicating the computation of intermediate terms in
the chain rule, the backpropagation method calculates the gradient
of the loss function with respect to each weight using the chain rule,
layer by layer, iterating backward from the last layer.
An autoencoder is an artificial neural network (ANN), a machine
learning algorithm, which uses backpropagation and sets the target
values equal to the input values.
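As a brief sketch of what this means in practice, training in Keras fits the model with each input as its own target (the `autoencoder` model and the data arrays here are assumptions, not defined in this article):

autoencoder.compile(optimizer='adam', loss='mse')
# Backpropagation runs inside fit(); note the input doubles as the target.
autoencoder.fit(x_train, x_train,
                epochs=10,
                batch_size=256,
                validation_data=(x_test, x_test))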

7. De-noising and Contractive are examples of _.

1. Shallow Neural Networks

2. Autoencoders

3. Convolution Neural Networks

4. Recurrent Neural Networks

Ans: Autoencoders
Autoencoders include de-noising and contractive variants. A de-noising
autoencoder can recreate data from a corrupted input signal. The
removal of some elements of the original data is an example of such
corruption. An encoder's output is usually a refined version of the
original input. A contractive autoencoder is trained with an
unsupervised learning approach.

8. Autoencoders cannot be used for Dimensionality Reduction.


Select the correct answer from the options given below.

1. True

2. False

Ans: False

The hidden layer has fewer dimensions than the input and output layers;
therefore it contains compressed information from the input layer,
which is why it functions as a dimensionality reduction of the original
input.

9. Autoencoders are trained without supervision.

1. True

2. False

Ans: True
A data compression approach called "autoencoding" uses separate
functions for compression and decompression that are:
a) lossy,
b) data-specific, and
c) learned automatically from examples rather than engineered by humans.
As a result, autoencoders are trained without supervision. Additionally,
neural networks are employed to accomplish the compression and
decompression operations in practically all settings where the word
"autoencoder" is used.
It would be incredibly challenging to compress the input features
and then reconstruct them if they were all independent of one
another.

Short Questions

1. Define Autoencoders.
An autoencoder is an artificial neural network (ANN), a machine
learning algorithm, which uses backpropagation and sets the target
values equal to the input values. It is constructed in a way that it can
do both data encoding and data decoding tasks in order to recreate
the actual input.


2. How Do Autoencoders Function?


It utilizes the following elements to do the aforementioned tasks:

1. Encoder: The encoder layer compresses the input image into
a smaller representation. The compressed version is a distorted
version of the original image.

2. Code: This portion of the network represents the compressed
input that is fed to the decoder.

3. Decoder: Using a lossy reconstruction and the latent space
representation, this decoder layer restores the encoded
image to its original dimension.

3. What are the Uses of Autoencoders?


Today's world of images requires the usage of autoencoders for a
variety of purposes. These are some of the uses for them:

1. Data compression

2. Dimensionality reduction

3. Image denoising

4. Feature extraction

5. Removing watermarks from Images

4. Give Two Actual Case Studies Where Autoencoders Have
Been Used.

Image coloring: Black and white images are transformed into
colored images by autoencoders. As a result, the color can be
determined based on the subject of the image.
Feature variation: In this case, noise or unneeded interruptions are
eliminated and just the relevant features of a picture are retrieved
and employed to produce the result.

5. Describe the sparse constraint.


A sparse constraint is one that appears in the loss function of a
sparse autoencoder. When we use many nodes in the hidden layer,
the sparse constraint ensures that the autoencoder is not overfitting
the training data.


6. What is a Bottleneck, and Why is it Used?


The layer between the encoder and the decoder is the bottleneck.
It is a well-designed strategy for choosing which features of the
observed data are important and which can be ignored.
It accomplishes this by striking a balance between two factors:

1. Compressibility, which is a measurement of representational
compactness.

2. Retention of some behaviorally important input variables.

7. Name some of the Autoencoder Variations.


Some of the Autoencoder Variations are as follows:

1. Convolutional Autoencoders

2. Sparse Autoencoders

3. Deep Autoencoders

4. Contractive Autoencoders

8. What distinguishes GANs from auto-encoders?


Both the encoding network and the decoding network are
simultaneously learned by an autoencoder. The encoder attempts to
reduce the input dimensions to a severely compressed encoded
form when an input (such as an image) is provided. The decoder is
then fed this. The loss measure rises with the difference between
the input and output image, so the neural network learns the
encoding/decoding. After each iteration, the encoder becomes a
little bit more adept at finding an effective compressed version of
the input data. Additionally, the decoder becomes slightly more
adept at reassembling the input from the encoded form.
A generator in generative adversarial networks (GANs) turns a noise
signal into a target space (for example, images). The
discriminator, on the other hand, separates the genuine images
taken from the desired target space from the false images that
the generator produced, making it the other component (the
adversary). In order to train the network, two phases are alternated,
each with a different loss.

9. What Distinguishes Variational Autoencoders From Other
Autoencoders?

In contrast to other forms of autoencoders, variational autoencoders
are generative models. Variational autoencoders are frequently
utilized in generative tasks because they, like GANs, learn the
distribution of the training set.

10. What is the Difference Between an Autoencoder and PCA in
Terms of Dimensionality Reduction?

The following are some ways that it differs from PCA (Principal
Component Analysis):
With numerous layers and a non-linear activation function, an
autoencoder can learn non-linear transformations.
Convolutional layers can be used for learning instead of dense layers,
which turns out to be more effective for video, image, and series
data.
Additionally, learning multiple neural network layers using an
autoencoder is more effective than learning a single, massive
transformation with PCA.
It can use transfer learning to improve the encoder/decoder by using
pre-trained layers from another model.

11. What is the need for contractive autoencoders?


We employ contractive autoencoders to make sure that our
encodings are more resistant to minor perturbations found in the
training set.
Representations that are too sensitive to the input are penalized
by the contractive autoencoders' introduction of a new penalty term
in the loss function.
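Concretely, that penalty is the Frobenius norm of the Jacobian of the code with respect to the input. A hedged TensorFlow sketch of one training step (the model shapes, `lam`, and all names are illustrative assumptions, not the article's code):

import tensorflow as tf
from tensorflow import keras
from tensorflow.keras import layers

inputs = keras.Input(shape=(784,))
code = layers.Dense(64, activation="sigmoid")(inputs)
outputs = layers.Dense(784, activation="sigmoid")(code)
autoencoder = keras.Model(inputs, outputs)
encoder = keras.Model(inputs, code)

lam = 1e-4  # weight of the contractive penalty (illustrative)
optimizer = keras.optimizers.Adam()

def train_step(x):
    with tf.GradientTape() as tape:
        with tf.GradientTape() as inner:
            inner.watch(x)
            h = encoder(x)
        # Penalize the Frobenius norm of d(code)/d(input): representations
        # that change a lot for small input changes cost more.
        jac = inner.batch_jacobian(h, x)
        penalty = tf.reduce_sum(tf.square(jac))
        recon = autoencoder(x)
        loss = tf.reduce_mean(tf.square(x - recon)) + lam * penalty
    grads = tape.gradient(loss, autoencoder.trainable_weights)
    optimizer.apply_gradients(zip(grads, autoencoder.trainable_weights))
    return loss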

12. Can you use Batch Normalisation in Sparse Auto-encoders?


There is research that suggests a novel detector using a batch
normalization masked assessment model to increase the precision
of grasping detection.
It is built utilizing a two-layer sparse autoencoder, and the second
layer of the model incorporates a Batch Normalization-based mask
to efficiently decrease the weakly correlated features.
The more distinct features that are extracted from such a model
ensure that the grasping detection will be more accurate.

13. Describe how the convolutional autoencoder's encoder and
decoder function.
We send the input image to the convolutional layer-based encoder.
Convolution is carried out by the convolutional layers, which also
extract significant features from the image.
Then, using the max pooling technique to keep only the most crucial
aspects of the image, we produce a latent image representation
known as the bottleneck.
We provide the bottleneck as an input to the decoder.
The decoder performs the deconvolution operation and attempts to
rebuild the image from the bottleneck using deconvolutional layers.

14. Difference between overcomplete and undercomplete
autoencoders

The autoencoder is referred to as an overcomplete autoencoder
when the dimension of the code or latent representation is greater
than the dimension of the input. On the other hand, the autoencoder
is known as an undercomplete autoencoder when the dimension of
the code or latent representation is smaller than the dimension of
the input.
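As a toy illustration of the two regimes (all sizes are arbitrary assumptions):

import keras
from keras import layers

inp = keras.Input(shape=(784,))

# Undercomplete: the code (32 units) is smaller than the input (784).
under_code = layers.Dense(32, activation='relu')(inp)

# Overcomplete: the code (1024 units) is larger than the input; such a
# model needs extra constraints (e.g. sparsity) to avoid simply
# learning the identity function.
over_code = layers.Dense(1024, activation='relu')(inp)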

15. How can you evaluate the performance of an autoencoder?

Because autoencoders are data-specific, they can only compress
data that is similar to the data they were trained on. The
effectiveness of the method might therefore be assessed based on
the usefulness of the features that have been learned via the hidden
layers. This is why, in my opinion, cutting the output of the intermediate
hidden layer and comparing the accuracy/performance of your
chosen algorithm using this reduced data, rather than the original
data, is a good way to assess an autoencoder's effectiveness in
dimensionality reduction.
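One hedged way to operationalize this (assuming a trained `encoder` model and labeled arrays `x_train`, `y_train`, `x_test`, `y_test`, none of which are defined in this article):

from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score

# Compress the data with the trained encoder (the intermediate layer's output).
codes_train = encoder.predict(x_train)
codes_test = encoder.predict(x_test)

# Train the downstream algorithm on the compressed features; compare this
# score with the same model trained on the original data.
clf = LogisticRegression(max_iter=1000).fit(codes_train, y_train)
print(accuracy_score(y_test, clf.predict(codes_test)))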

16. How are autoencoders used for image denoising?


Autoencoders can be used for denoising images. In order to
provide the encoder with corrupted input rather than raw input, we
first add some noise to the input to corrupt it.
The encoder will discover that the noise is undesired information as
it learns the representation of the input and will drop it from that
representation. In order to transfer the learnt representation to the
bottleneck, the encoder learns a compact representation of the input
that excludes noise and retains only the information that is
necessary.
The decoder then reconstructs the image using the bottleneck
produced from the input. The decoder can produce a denoised image
from the bottleneck since there is no representation of the noise in
the bottleneck.

17. Explain how autoencoders can be used for anomaly
detection.

To find anomalies in a high-dimensional dataset, follow these steps.
This also works with unbalanced datasets.
Don't feed any unusual transactions into the encoder during
training. The latent representation of the typical input data will be
learned by the bottleneck layer.
The decoder will rebuild the typical transactions of the initial input
data using the output from the bottleneck layer.
A fraudulent transaction will be dissimilar from the typical
transactions. The fraudulent transaction will be difficult for the
autoencoder to reconstruct, which will cause a significantly higher
reconstruction error.
On the basis of a chosen threshold value for the reconstruction
error, you can flag a new transaction as fraudulent.
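A minimal sketch of the thresholding step (assuming a trained `autoencoder` and 2D arrays `x_train`, `x_new`; all names are illustrative):

import numpy as np

# Reconstruction error on the normal-only training data.
recon = autoencoder.predict(x_train)
train_errors = np.mean(np.square(x_train - recon), axis=1)

# Pick a threshold, e.g. the 99th percentile of the training error.
threshold = np.percentile(train_errors, 99)

# Flag new transactions whose reconstruction error exceeds the threshold.
new_errors = np.mean(np.square(x_new - autoencoder.predict(x_new)), axis=1)
is_fraudulent = new_errors > threshold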

18. How to reverse PCA and reconstruct original variables from
several principal components?

PCA calculates the covariance matrix's eigenvectors ("principal
axes") and ranks them according to their eigenvalues (the amount of
explained variance). The principal components can then be
produced by projecting the centered data onto these principal axes
("scores"). For dimensionality reduction, one can keep only a portion
of the principal components and throw away the rest.
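A small sketch of the reconstruction with scikit-learn (toy data; n_components=3 is an arbitrary choice):

import numpy as np
from sklearn.decomposition import PCA

X = np.random.rand(100, 10)            # toy data
pca = PCA(n_components=3).fit(X)
scores = pca.transform(X)              # project onto the principal axes

# Reverse: multiply the scores by the principal axes and add the mean back.
X_hat = scores @ pca.components_ + pca.mean_
# Equivalent shortcut: X_hat = pca.inverse_transform(scores)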

19. Why use an autoencoder for dimensionality reduction?


Take into account a feed-forward, fully connected auto-encoder with an
input layer, 1 hidden layer with k units, 1 output layer, and all linear
activation functions.
The latent space of this auto-encoder spans the first k principal
components of the original data. This is useful if you wish to represent
the input with fewer features but aren't particularly concerned with the
orthogonality restriction in PCA.

However, auto-encoders permit a variety of modifications on this
fundamental concept, providing you more options than PCA for how
the latent space should be built. It is obvious that using CNN layers
in place of FFNs results in a different kind of model than PCA, and,
as a result, it will encode various kinds of information in the latent
space. Another alternative to PCA's latent encoding is to use
nonlinear activation functions (because PCA is linear). Sparse,
contractive, and variational auto-encoders also have different
objectives than PCA and will provide different outcomes, which can
be useful depending on the issue you're trying to resolve.
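A minimal sketch of such an all-linear auto-encoder (k and the input size are illustrative assumptions):

import keras
from keras import layers

k = 10
inp = keras.Input(shape=(784,))
code = layers.Dense(k, activation='linear')(inp)   # k-dimensional latent space
out = layers.Dense(784, activation='linear')(code)

linear_ae = keras.Model(inp, out)
linear_ae.compile(optimizer='adam', loss='mse')
# After training, the latent space spans the same subspace as the first k
# principal components, without PCA's orthogonality guarantee.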

20. What are the important hyperparameters that need to be set
before training an autoencoder?

Before training an autoencoder, we must establish the following
hyperparameters:
Code size: The number of middle-layer nodes is a measure of code size.
More compression is achieved with smaller dimensions.

Number of layers: We are free to choose the depth of the
autoencoder.
Without counting the input and output, the encoder and decoder
might, for instance, have two layers each.
Nodes per layer: Because the layers are stacked one on top of the
other, the autoencoder design we're working on is known as a
stacked autoencoder. Stacked autoencoders frequently resemble a
sandwich: fewer nodes per layer in the encoder with each additional
layer, and correspondingly more nodes per layer in the decoder. In
terms of structure, the decoder and the encoder are also symmetric.
Since we have complete control over these factors, as was previously
mentioned, this is not essential.
Loss function: We have two options for the loss function: binary
crossentropy or mean squared error (mse). Crossentropy is
commonly used if the input values fall within the [0, 1] range;
otherwise, mean squared error is employed.
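Putting the four hyperparameters together, a hedged sketch of a symmetric stacked autoencoder might look like this (all sizes are illustrative assumptions):

import keras
from keras import layers

input_img = keras.Input(shape=(784,))

# Encoder: two layers, with fewer nodes in each additional layer.
x = layers.Dense(128, activation='relu')(input_img)
x = layers.Dense(64, activation='relu')(x)

code = layers.Dense(32, activation='relu')(x)   # code size = 32

# Decoder: symmetric to the encoder, node counts increase back.
x = layers.Dense(64, activation='relu')(code)
x = layers.Dense(128, activation='relu')(x)
output = layers.Dense(784, activation='sigmoid')(x)

stacked_ae = keras.Model(input_img, output)
# Inputs scaled to [0, 1], so binary crossentropy is a valid loss choice.
stacked_ae.compile(optimizer='adam', loss='binary_crossentropy')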

21. How to reverse a max pooling layer in an autoencoder to retrieve
the original shape in the decoder?

from keras import layers
from keras.models import Model

input = layers.Input(shape=(28, 28, 1))

# Encoder
x = layers.Conv2D(32, (3, 3), activation="relu", padding="same")(input)
x = layers.MaxPooling2D((2, 2), padding="same")(x)
x = layers.Conv2D(32, (3, 3), activation="relu", padding="same")(x)
x = layers.MaxPooling2D((2, 2), padding="same")(x)

# Decoder
x = layers.Conv2DTranspose(32, (3, 3), strides=2, activation="relu", padding="same")(x)
x = layers.Conv2DTranspose(32, (3, 3), strides=2, activation="relu", padding="same")(x)
x = layers.Conv2D(1, (3, 3), activation="sigmoid", padding="same")(x)

# Autoencoder
autoencoder = Model(input, x)
autoencoder.compile(optimizer="adam", loss="binary_crossentropy")
autoencoder.summary()

A 2D tensor can be upscaled in a variety of ways, or it can be
projected from a smaller vector into a bigger one.

Here's a non-exhaustive list:

1. Apply a single or a few upsampling layers, then a flatten
layer, then a linear layer. To expand the size of your image,
upsampling essentially uses common image upscaling
algorithms. It should then be flattened so that a linear layer
may be added to it, giving you the precise shape you need.

2. Apply a flatten, then a projection layer, skipping the upscale
entirely.

This will do for MNIST. Use the previously mentioned advice,
interspersed with convolutional blocks, for more complicated
datasets to boost your models' capacity and reconstruction
abilities.

You have already tried the UpSampling + Conv route. Applying a
flatten layer, a projection layer with 784 output units, and then
reshaping into (batch, 28, 28, 1) once more will provide the
results you require.
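A hedged sketch of that flatten + projection + reshape route for MNIST-shaped data (the encoder here is a single illustrative conv block, not the article's exact model):

from keras import layers
from keras.models import Model

input = layers.Input(shape=(28, 28, 1))
x = layers.Conv2D(32, (3, 3), activation="relu", padding="same")(input)
x = layers.MaxPooling2D((2, 2), padding="same")(x)   # encoded feature maps

# Decoder without any upsampling/transposed convolutions:
x = layers.Flatten()(x)                              # collapse the feature maps
x = layers.Dense(784, activation="sigmoid")(x)       # project to 28*28 units
x = layers.Reshape((28, 28, 1))(x)                   # back to the image shape

autoencoder = Model(input, x)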

22. Why do we need Denoising?

from keras.datasets import mnist

# Since we only need images from the dataset to encode and decode,
# we won't use the labels.
(train_data, _), (test_data, _) = mnist.load_data()

# Normalize and reshape the data
train_data = preprocess(train_data)
test_data = preprocess(test_data)

# Create a copy of the data with added noise
noisy_train_data = noise(train_data)
noisy_test_data = noise(test_data)

# Display the train data and a version of it with added noise
display(train_data, noisy_train_data)
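The preprocess and noise helpers are not defined in this article; a plausible sketch of what they might look like (the noise factor is an arbitrary choice):

import numpy as np

def preprocess(array):
    # Scale to [0, 1] and add a channel dimension for the conv layers.
    array = array.astype("float32") / 255.0
    return np.reshape(array, (len(array), 28, 28, 1))

def noise(array, noise_factor=0.4):
    # Add Gaussian noise and clip back into the valid [0, 1] range.
    noisy = array + noise_factor * np.random.normal(size=array.shape)
    return np.clip(noisy, 0.0, 1.0)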

Data is distorted in some way during the denoising process by
adding random noise, and the model is trained to predict the
original, uncorrupted data.
A different approach to this is to leave out portions of the input rather
than introducing noise, so that the model can learn to predict the
original image.
The objective here is to save the encoder's output as a feature
vector so that it may be used in a supervised model train-predict
technique.

The use of denoising autoencoders can be aimed at cleaning up
stained scanned images or helping with feature selection efforts in
cancer biology. Regarding the former, the output of an old-image
encoder contributes to a model's ability to recover the original image
utilizing strong latent representations produced by the decoder. In
terms of cancer biology, the retrieved encoder features aid in the
development of a more accurate cancer diagnosis.

23. How do we add a sparsity constraint on the encoded
representations?

import keras
from keras import layers
from keras import regularizers

encoding_dim = 32

input_img = keras.Input(shape=(784,))
# Add a Dense layer with a L1 activity regularizer
encoded = layers.Dense(encoding_dim, activation='relu',
                       activity_regularizer=regularizers.l1(10e-5))(input_img)
decoded = layers.Dense(784, activation='sigmoid')(encoded)

autoencoder = keras.Model(input_img, decoded)

Fewer units would "fire" at a given moment if the activity of the hidden
representations were subject to a sparsity constraint, which is another
way in which the representations can be constrained to be compact.
This can be accomplished in Keras by including an activity regularizer
in the Dense layer, as in the snippet above.

24. Can you explain when to use a sequence-to-sequence
autoencoder?

timesteps = ...  # Length of your sequences
input_dim = ...
latent_dim = ...

inputs = keras.Input(shape=(timesteps, input_dim))

encoded = layers.LSTM(latent_dim)(inputs)

decoded = layers.RepeatVector(timesteps)(encoded)
decoded = layers.LSTM(input_dim, return_sequences=True)(decoded)

sequence_autoencoder = keras.Model(inputs, decoded)

encoder = keras.Model(inputs, encoded)

You might wish to employ an encoder and decoder that can capture
temporal structure, such as an LSTM, if your inputs are sequences
rather than vectors or 2D images. In order to create an LSTM-based
autoencoder, you must first use an LSTM encoder to turn your input
sequences into a single vector that contains details about the entire
sequence. Next, you must repeat this vector n times (where n is the
number of timesteps in the output sequence), and finally, you must
use an LSTM decoder to convert this constant sequence into the
desired sequence.

25. Why do we use binary cross entropy loss on autoencoders?

input = layers.Input(shape=(28, 28, 1))

# Encoder
x = layers.Conv2D(32, (3, 3), activation="relu", padding="same")(input)
x = layers.MaxPooling2D((2, 2), padding="same")(x)
x = layers.Conv2D(32, (3, 3), activation="relu", padding="same")(x)
x = layers.MaxPooling2D((2, 2), padding="same")(x)

# Decoder
x = layers.Conv2DTranspose(32, (3, 3), strides=2, activation="relu", padding="same")(x)
x = layers.Conv2DTranspose(32, (3, 3), strides=2, activation="relu", padding="same")(x)
x = layers.Conv2D(1, (3, 3), activation="sigmoid", padding="same")(x)

# Autoencoder
autoencoder = Model(input, x)
autoencoder.compile(optimizer="adam", loss="binary_crossentropy")
autoencoder.summary()

The values of the input data have an impact on the loss function.
Binary crossentropy is acceptable as the loss function if the input
data are limited to values between zero and one (rather than values
outside of this range). If not, you must use alternative loss
functions like "mse" (mean squared error) or "mae" (mean
absolute error). Note that you can use binary crossentropy, as is
often done, for input values in the range [0, 1] (e.g. the Keras
autoencoder examples). Expect the loss value to remain positive,
though, as binary crossentropy does not return zero when neither
the prediction nor the label are exactly zero or one (no matter
whether they are equal or not).

Saroj Mali
Saroj Mali is a Machine Learning Developer, Intern at OpenGenus, and
has research interests in Deep Learning.

Improved & Reviewed by:
OpenGenus Foundation
