Gen Ai
ResNet
BERT
GPT series
CycleGAN
TransformerXL
To transform data
To generate data
To distinguish between real and generated data
To encode data
To decode data
BigGAN
CycleGAN
DCGAN
VAE
Transformer
DCGAN
Transformer
CycleGAN
VAE
BigGAN
Decoding Mechanism
Self-Attention Mechanism
None of the given options
Encoding Mechanism
Recurrent Mechanism
7. What is the main innovation introduced by the "Attention Is All You Need" paper?
Introduction of RNNs
Introduction of CNNs
Introduction of GANs
Transformer architecture
Introduction of VAEs
Transformer
DCGAN
BigGAN
CycleGAN
VAE
10. Which model is known for its guidelines for building stable and effective AI image generators?
CycleGAN
VAE
DCGAN
BigGAN
Transformer
K-Means Clustering
Support Vector Machines
Decision Trees
Neural Networks
Quantum Entanglement
Leaky ReLU
Sigmoid
Hyperbolic Tangent (tanh)
Linear
Rectified Linear Unit (ReLU)
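For reference, the activation functions listed in these options can each be written in a few lines of Python (an illustrative sketch using only the standard library):

```python
import math

# Common activation functions, as listed in the options above.
def relu(x):
    return max(0.0, x)

def leaky_relu(x, alpha=0.01):
    # Like ReLU, but lets a small gradient through for negative inputs.
    return x if x > 0 else alpha * x

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def tanh(x):
    return math.tanh(x)

for f in (relu, leaky_relu, sigmoid, tanh):
    print(f.__name__, round(f(-2.0), 4), round(f(2.0), 4))
```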
Supervised Learning
Recursive Learning
Unsupervised Learning
Reinforcement Learning
Semi-supervised Learning
Recommendation
Ranking
Regression
Classification
Clustering
Bias Function
Loss Function
Activation Function
Weight Function
Linear Function
Hidden Layer
Quantum Layer
Output Layer
Convolutional Layer
Input Layer
Clustering
Supervised Learning
Unsupervised Learning
Regression
Reinforcement Learning
Polynomial Function
ReLU (Rectified Linear Unit)
Linear Function
Bias Activation
Weighted Sum
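For context, a single artificial neuron computes a weighted sum of its inputs plus a bias, then applies an activation function. A minimal sketch (the `neuron` helper is hypothetical, for illustration only):

```python
# One artificial neuron: weighted sum of inputs + bias, then ReLU activation.
def neuron(inputs, weights, bias):
    z = sum(i * w for i, w in zip(inputs, weights)) + bias  # weighted sum
    return max(0.0, z)                                      # ReLU

out = neuron([1.0, 2.0], [0.5, -0.25], 0.1)  # z = 0.5 - 0.5 + 0.1 = 0.1
print(out)
```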
Regression
Ranking
Clustering
Classification
Anomaly Detection
Quiz: Introduction to Generative Models - Pre Quiz
1. Which statement best differentiates generative from discriminative models?
Generative model
Bayesian model
Likelihood model
Joint probability model
Discriminative model
Discriminative Model
Both Generative and Discriminative
Bayesian model
Generative Model
Probability Distribution
To optimize algorithms
To classify data
To generate new data samples
To analyze data distributions
To reduce computational cost
8. In the context of models, what does P(x | y) typically represent?
Regression
Classification
Generating new data samples similar to the input data
Reinforcement learning
Clustering
Convolutional layers
Activation functions
Probability distributions and likelihood
Gradient descent
Backpropagation
Quiz: Introduction to Generative Models - Post Quiz
1. Which of the following is NOT a property of likelihood?
4. Which model type aims to capture the joint probability P(x, y)?
Discriminative Model
Both Generative Model and Discriminative Model
Generative Model
Probability Distribution
regression model
P(model | data)
P(data)
P(data & model)
P(model)
P(data | model)
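For context, the likelihood P(data | model) is the probability of the observed data under a fixed model. A minimal sketch, assuming a coin-flip (Bernoulli) model with bias p:

```python
# Likelihood P(data | model): probability of the observed flips
# given a coin model with probability p of heads.
def likelihood(flips, p):
    out = 1.0
    for f in flips:
        out *= p if f == "H" else (1.0 - p)
    return out

data = ["H", "H", "T"]
print(likelihood(data, 0.5))  # 0.5 * 0.5 * 0.5 = 0.125
```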
Only denoising
Only data generation
Data generation, denoising, inpainting, and more
Classification only
Data labeling only
Hinge loss
Absolute error
Cross-entropy
KL divergence
Mean squared error
Vectorized Autoencoder
Virtual Autoencoder
Variational Autoencoder
Variable Autoencoder
Sparse autoencoder
Supervised autoencoder
Variational autoencoder
Denoising autoencoder
Contractive autoencoder
Speech recognition
Designing virtual fashion items
Text translation
Time series forecasting
Image classification
Accuracy
Cross-entropy only
Precision
KL divergence
Mean squared error only
Text generation
Face generation for video games
Fashion design
Music composition
Anomaly detection in industrial equipment
Backpropagation
Variational inference
Gradient descent
Transfer learning
None of the given options
Genetic algorithms
Simulated annealing
Principal component analysis
Stochastic gradient descent (SGD)
None of the given options
Generation
Classification
Reconstruction
Clustering
Filtering
Latent Space
Anomaly Score
Timestamp
Data Value
Reconstruction Error
80-20
90-10
70-30
60-40
50-50
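The ratios above are typical train/validation splits. A minimal stdlib sketch of an 80-20 split (the `data` list is a stand-in for a real dataset):

```python
import random

# Shuffle, then hold out the last 20% for validation (80-20 split).
data = list(range(100))
random.seed(0)
random.shuffle(data)
cut = int(0.8 * len(data))
train, val = data[:cut], data[cut:]
print(len(train), len(val))  # 80 20
```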
Validation accuracy
Latent space dimensions
Data input size
Loss
Training time
100
50
10
25
40
Randomness
Recursion
Parallelism
Determinism
Linearity
Disintegration
Reformatting
Shrinking
Continuous adaptation
Manual recalibration
Humidity
Pressure
Timestamp
Temperature
Vibration
14. What is a primary application of VAEs mentioned in the case study?
Speech Recognition
Object Detection
Text Summarization
Image Classification
Anomaly Detection
Text Dataset
Image Dataset
Tabular Dataset
Audio Dataset
Time Series Dataset
Chess
Sudoku
None of the given options
Poker
Minimax
Discriminator
Encoder
Generator
None of the given options
Decoder
Speech recognition
Image-to-image translation
Text summarization
Time series forecasting
Image classification
Progressive GAN
Conditional GAN
None of the given options
Mode Collapse GAN
Minimax GAN
Optimizer
Generator
Decoder
Encoder
Discriminator
Decoder
Discriminator
Encoder
None of the given options
Generator
Data augmentation
Minimax GAN
Mode Collapse GAN
Conditional GAN
None of the given options
Progressive GAN
tanh
relu
sigmoid
softmax
leakyrelu
Dense
Reshape
Conv2D with strides
BatchNormalization
UpSampling2D
10000
5000
12000
6000
15000
Dropout
All of the given options
Data augmentation
Gradient clipping
Noise addition
Mode Collapse
All of the given options
Data Augmentation
Convergence Issues
Training Instability
7. In the provided code, why is discriminator.trainable set to False when setting up the combined system?
To speed up training
To prevent overfitting
None of the given options
To make sure only the generator is trained in this step
To increase discriminator's accuracy
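For context: in typical Keras GAN code, `discriminator.trainable = False` is set before compiling the combined model so that the generator's training step cannot also update the discriminator's weights. A framework-free toy sketch of the idea (the `Layer` class here is hypothetical, not Keras):

```python
# Toy illustration of freezing: when trainable is False, an update step
# leaves the weights untouched, so only the generator learns in this step.
class Layer:
    def __init__(self, weight):
        self.weight = weight
        self.trainable = True

    def update(self, gradient, lr=0.25):
        if self.trainable:  # frozen layers skip the weight update
            self.weight -= lr * gradient

generator = Layer(1.0)
discriminator = Layer(1.0)
discriminator.trainable = False  # analogue of discriminator.trainable = False

# One "combined" training step: both receive gradients, only one changes.
generator.update(0.5)
discriminator.update(0.5)
print(generator.weight, discriminator.weight)  # 0.875 1.0
```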
10. Which of the following best describes the role of the generator in a GAN?
LSTM
WGAN
DBN
CNN
RNN
Adam Optimizer
Conv2D
All of the given options
Batch Normalization
LeakyReLU
14. Which of the following is NOT a feedback given to the generator during training?
15. During training, what does the generator use to improve itself?
CIFAR-10 dataset
Feedback from the discriminator
Real images
Feedback from the user
Feedback from both the user and the discriminator
GRU
LSTM
Bidirectional RNN
CNN
Simple RNN
Next word
None of the options given
Next song note
Next image
Next video frame
Audio
Tabular
None of the options given
Sequential
Image
A frequency band
A specific instrument
A note in the C major scale
A time step in the generated sequence
A possible note or rest in the musical vocabulary
As discriminators in GANs
Encoding the input sequence and decoding the output sequence
For clustering text data
As a replacement for CNNs
For image classification
L1 regularization
Dropout
Data augmentation
Batch normalization
Gradient clipping
Image generation
Image classification
Object detection
Text generation
Face recognition
Previous output
Current input
None of the given options
Previous error
Current weight
More layers
None of the given options
Handling long-term dependencies
Lower computational cost
Faster computation
They don't
None of the given options
Through padding and truncation
By skipping them
By changing the network size
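For context on the "padding and truncation" option: both bring variable-length sequences to one fixed length, which is what utilities like Keras's pad_sequences do. A minimal stdlib sketch (the `pad_or_truncate` helper is hypothetical):

```python
# Pad short sequences with a fill value; truncate long ones to maxlen.
def pad_or_truncate(seq, maxlen, pad_value=0):
    if len(seq) >= maxlen:
        return seq[:maxlen]                              # truncate
    return seq + [pad_value] * (maxlen - len(seq))       # pad

print(pad_or_truncate([5, 6, 7], 5))            # [5, 6, 7, 0, 0]
print(pad_or_truncate([1, 2, 3, 4, 5, 6], 5))   # [1, 2, 3, 4, 5]
```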
Speech recognition
None of the given options
Text generation
Image classification
Time series prediction
LSTM is outdated
LSTM can't handle sequences
None of the given options
GRU is simpler and sometimes faster
GRU is always more accurate
GRU
Bidirectional RNN
None of the given options
Simple RNN
LSTM
2. Which layer in the RNN model represents words as detailed feature lists?
LSTM Layer
Dense Layer
SimpleRNN Layer
Dropout Layer
Embedding Layer
Simpler architecture
Lower memory usage
Requires fewer layers
Tackles the vanishing gradient problem
Faster convergence
5. What is the purpose of the Dropout layer in the LSTM with Dropout model?
epochs
loss
optimizer
validation_data
batch_size
Increases accuracy
Reduces training speed
Impedes learning of long-range dependencies
Requires more memory
Makes model evaluation faster
12. In the given LSTM model, which layer(s) help in retaining memory and context?
Dropout layer
Embedding layer
SimpleRNN layer
LSTM layer
Dense layer
13. When using a tokenizer with a fixed number of words, what could be a potential drawback?
Regularization
Tokenization
Handling out-of-vocabulary words
Representing words in dense vector format
Reducing sequence length
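For context on the out-of-vocabulary drawback: with a capped vocabulary, any word outside it must map to a catch-all token, losing information. A minimal sketch (the `vocab` mapping and `encode` helper are hypothetical):

```python
# A fixed vocabulary: unknown words all collapse to the <OOV> index,
# which is the drawback of capping a tokenizer's word count.
vocab = {"<OOV>": 1, "the": 2, "cat": 3, "sat": 4}

def encode(words):
    return [vocab.get(w, vocab["<OOV>"]) for w in words]

print(encode(["the", "cat", "sat", "purring"]))  # [2, 3, 4, 1]
```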
15. After training, what can be inferred if the validation loss keeps decreasing but training loss remains high?
Attention Mechanism
LSTM
CNN
None of the options given
RNN
GPT
DALL·E
BERT
T5
Image GPT
Improving regularization
Speeding up training
Reducing model size
Capturing different types of information from the input
None of the options given
Translation
Question Answering
None of the options given
Summarization
Image Classification
Fine-tuning
Pre-training
None of the options given
Initialization
Backpropagation
Sequence alignment
Named entity recognition
Image generation using DALL·E
Text summarization
Speech recognition
Image Classification
Question Answering
None of the options given
Summarization
Translation
3. Which model can be used for both image and text tasks?
T5
GPT
None of the options given
BERT
DALL·E
DALL·E
T5
GPT
None of the options given
BERT
It is faster
It is only used in GPT
It allows the model to focus on multiple parts of the input simultaneously
It uses fewer parameters
None of the options given
DALL·E
BERT
Image GPT
GPT
T5
Backpropagation
CNN layers
Self-attention
Dropout
LSTM cells
Had no effect
Reduced it drastically
Made it much longer
Made it slightly faster
Increased server costs
4. What technology does GlobeTech plan to integrate with Transformers for customer support in the future?
Image recognition
Voice recognition
Gesture recognition
Augmented reality
Text summarization
Debugging
Re-analysis
Re-programming
Refactoring
Re-training and fine-tuning
By enhancing graphics
By providing real-time translations of UI elements
By adding more interactive elements
By offering more payment options
By changing the website layout
Contextual Translation
Graphics
Interactivity
Real-time Voice Translations
Speed
0.1
0.5
0.2
0.4
0.3
Google Bard
LaMDA
Bing AI
ChatGPT
DALL-E
LaMDA
Bing AI
ChatGPT
DALL-E
Google Bard
Manufacturing
Healthcare
Cybersecurity
Advertising
Education
ChatGPT
Google Bard
DALL-E
Bing AI
LaMDA
Education
Finance
Advertising
Agriculture
Cybersecurity
Generative AI
Visual AI
Discriminative AI
Binary AI
Transformer AI
Emphasizing creativity
Ethically developed
By accessing the web
Slack integration
Using "red team" prompts for safety
Audio quality
Video resolution
Efficient basic content generation
Scripted prompts
Language integrations
5. What is a potential concern when using Code Whisperer by AWS with open-source projects?
0.05
0.02
0.03
0.04
0.01