A Gentle Introduction to RoBERTa
Introduction
In 2018, Google AI released a self-supervised learning model called BERT for learning language
representations. Then, in 2019, Yinhan Liu et al. (Meta AI) proposed RoBERTa (Robustly Optimized
BERT Pretraining Approach), a robustly optimized approach for pretraining natural language processing (NLP)
systems that improves on Bidirectional Encoder Representations from Transformers (BERT).
In this article, we will take a look at RoBERTa in more detail.
Now, let’s jump right in!
Highlights
RoBERTa is a reimplementation of BERT with some modifications to the key hyperparameters and minor
embedding tweaks, along with a revised pre-training setup.
In RoBERTa, we don't need to indicate which token belongs to which segment or use token_type_ids; the
segments can simply be divided with the separation token tokenizer.sep_token (or </s>), as shown in the sketch after this list.
CamemBERT is a wrapper around RoBERTa.
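To make the point about segments concrete, here is a minimal sketch using the Hugging Face transformers library (roberta-base is just an example checkpoint): RoBERTa's tokenizer joins a sentence pair with </s></s> and does not rely on token_type_ids.

from transformers import RobertaTokenizer

tokenizer = RobertaTokenizer.from_pretrained("roberta-base")

# Encode a sentence pair: RoBERTa separates the two segments with </s></s>
# instead of using segment (token type) embeddings.
encoded = tokenizer("How are you?", "I am fine.")

print(tokenizer.convert_ids_to_tokens(encoded["input_ids"]))
print(encoded.keys())  # typically only input_ids and attention_mask, no token_type_ids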
What Prompted the Researchers to Develop a RoBERTa-like Model?
Researchers at Facebook AI and the University of Washington found that the BERT model was significantly
undertrained, and they proposed several changes to the pretraining procedure to improve the BERT
model's performance.
RoBERTa Model Architecture
The RoBERTa model shares the same architecture as the BERT model. It is a reimplementation of BERT with
some modifications to the key hyperparameters and minor embedding tweaks.
BERT's general pre-training and fine-tuning procedures are shown in Figure 1. In BERT, except for the
output layers, the same architecture is used in pre-training and fine-tuning. The same pre-trained model
parameters are used to initialize models for different downstream tasks, and during fine-tuning, all
parameters are updated.
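As a small illustration of that initialization step, here is a sketch using the Hugging Face transformers library and RoBERTa, which follows the same procedure (the checkpoint and label count are only examples): the pre-trained encoder weights are reused, a fresh classification head is added, and all parameters are then trainable during fine-tuning.

from transformers import RobertaForSequenceClassification

# Initialize a downstream classifier from the pre-trained checkpoint: the
# encoder weights are copied over, and a new classification head is added on top.
model = RobertaForSequenceClassification.from_pretrained("roberta-base", num_labels=3)

# During fine-tuning, every parameter (encoder and head) is updated.
trainable = sum(p.numel() for p in model.parameters() if p.requires_grad)
print(f"Trainable parameters: {trainable:,}")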
The original BERT implementation performs masking once during data preprocessing, which results in a single
static mask. The RoBERTa authors contrasted this approach with dynamic masking, in which a new masking
pattern is generated each time a sequence is fed to the model, and found dynamic masking to be on par with
or slightly better than static masking.
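As a rough illustration of dynamic masking, the sketch below uses the Hugging Face DataCollatorForLanguageModeling (an assumption for illustration, not the authors' fairseq implementation, which is linked at the end of the article); it re-samples the mask every time a batch is built, so the same sentence gets a different masking pattern on each pass.

from transformers import RobertaTokenizer, DataCollatorForLanguageModeling

tokenizer = RobertaTokenizer.from_pretrained("roberta-base")
collator = DataCollatorForLanguageModeling(
    tokenizer=tokenizer, mlm=True, mlm_probability=0.15
)

sample = tokenizer("RoBERTa masks tokens dynamically during training.")
batch = [{"input_ids": sample["input_ids"]}]

# Calling the collator twice on the same sentence yields two different
# masking patterns, unlike BERT's single static mask fixed at preprocessing time.
print(collator(batch)["input_ids"])
print(collator(batch)["input_ids"])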
Table 3: Comparison of RoBERTa's performance on different tasks with varying batch sizes
iv) A larger byte-level BPE:
Byte-pair encoding is a hybrid of character-level and word-level representations that can handle the large
vocabularies common in natural language corpora. Instead of using the character-level BPE vocabulary of
size 30K used in BERT, RoBERTa uses a larger byte-level BPE vocabulary with 50K subword units (without
extra preprocessing or tokenization of the input).
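As a quick sanity check of these vocabulary sizes, here is a sketch using the Hugging Face transformers library (bert-base-uncased and roberta-base are example checkpoints):

from transformers import BertTokenizer, RobertaTokenizer

bert_tok = BertTokenizer.from_pretrained("bert-base-uncased")
roberta_tok = RobertaTokenizer.from_pretrained("roberta-base")

print(bert_tok.vocab_size)     # ~30K vocabulary used by BERT
print(roberta_tok.vocab_size)  # ~50K byte-level BPE vocabulary used by RoBERTa

# Byte-level BPE can split any input string into known subword units,
# so no extra preprocessing or unknown-token handling is needed.
print(roberta_tok.tokenize("floccinaucinihilipilification"))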
All these findings highlight the importance of previously unexplored design choices in BERT training and help
distinguish the relative contributions of data size, training time, and pretraining objectives.
Key Achievements of RoBERTa
The RoBERTa model delivered state-of-the-art (SoTA) performance on the MNLI, QNLI, RTE, STS-B, and RACE
tasks at the time, along with a sizable performance gain on the GLUE benchmark. With a score of 88.5,
RoBERTa took the top spot on the GLUE leaderboard.
#Importing the necessary packages
import torch
from transformers import RobertaTokenizer, RobertaForSequenceClassification

#Loading a RoBERTa emotion classifier and tokenizing an example sentence
#(the checkpoint and input text below are illustrative placeholders; any RoBERTa
#model fine-tuned for emotion classification can be substituted)
model_name = "cardiffnlp/twitter-roberta-base-emotion"
tokenizer = RobertaTokenizer.from_pretrained(model_name)
model = RobertaForSequenceClassification.from_pretrained(model_name)
inputs = tokenizer("I am looking forward to the weekend!", return_tensors="pt")

#Retrieving the logits and using them for predicting the underlying emotion
with torch.no_grad():
    logits = model(**inputs).logits

predicted_class_id = logits.argmax().item()
print(model.config.id2label[predicted_class_id])
The output is “Optimism,” which is correct given the pre-defined labels of the classification model we used. We
could use a different pre-trained model, or fine-tune one ourselves, to obtain labels better suited to a particular use case.
Conclusion
To summarize, in this article, we learned the following:
1. The RoBERTa model shares the BERT model’s architecture. It is a reimplementation of BERT with some
modifications to the key hyperparameters and tiny embedding tweaks.
2. RoBERTa is trained on a massive dataset of over 160GB of uncompressed text instead of the 16GB
dataset originally used to train BERT. Moreover, RoBERTa is trained with i) FULL-SENTENCES without NSP
loss, ii) dynamic masking, iii) large mini-batches, and iv) a larger byte-level BPE.
3. Most performance gains result from better training, more powerful compute, or more data. While these
gains are valuable, they often trade computational efficiency for prediction accuracy. There is a need to
develop more sophisticated, capable, multi-task fine-tuning methods that can improve performance
using less data and compute.
4. We cannot directly compare XLNet and RoBERTa, since a fair comparison would require training XLNet on
the same data used to train RoBERTa.
That concludes this article. Thanks for reading. If you have any questions or concerns, please post them in
the comments section below. Happy learning!
Link to Research Paper: https://arxiv.org/pdf/1907.11692.pdf
Link to Original Code: https://github.com/pytorch/fairseq
The media shown in this article is not owned by Analytics Vidhya and is used at the Author’s discretion.
Drishti Sharma 09 Nov 2022