
BERT (Bidirectional Encoder Representations from Transformers) is a deep learning technique
for natural language processing that uses unsupervised language representation and
Transformer-based bidirectional models (a deep learning architecture in which each output
element is linked to each input element, and the weightings between them are set dynamically
via the attention mechanism based on their relationship) \cite{abdelgwad2021arabic}.\\ BERT
is a pre-trained model that can analyse language with high accuracy and successfully
handles ambiguity, the most difficult part of natural language understanding.
\subsubsection{BERT as Embedding layer} Unlike typical word2vec embedding layers, which
generate static, context-independent word vectors, BERT layers generate dynamic, context-dependent
word embeddings: they take the entire sentence as input and use its information to compute the
representation of each word. In this study we used bert-base-arabertv01, a BERT model pre-trained
for the Arabic language \cite{antoun2020arabert}. AraBERTv01 is based on Google's BERT
architecture; its pre-training data was collected from the Arabic Wikipedia, OSIAN (the Open
Source International Arabic News corpus), and an Arabic corpus, and the model is available at
https://github.com/aub-mind/arabert/. The pre-trained ``Arabic BERT'' model was trained on about
8.2 billion Arabic words. In particular, we used the BERT-base configuration, which consists of
12 hidden layers, 12 attention heads, and a hidden size of 768.
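As a minimal illustrative sketch (not the authors' code), the contextual word embeddings can be
extracted with the Hugging Face transformers library, assuming the aubmindlab/bert-base-arabertv01
checkpoint hosted on the Hugging Face Hub:
\begin{verbatim}
# Sketch: extracting contextual word embeddings with
# bert-base-arabertv01 (assumes the Hugging Face `transformers`
# library and the aubmindlab/bert-base-arabertv01 checkpoint).
import torch
from transformers import AutoModel, AutoTokenizer

name = "aubmindlab/bert-base-arabertv01"
tokenizer = AutoTokenizer.from_pretrained(name)
model = AutoModel.from_pretrained(name)
model.eval()

sentence = "..."  # an Arabic input sentence (placeholder)
inputs = tokenizer(sentence, return_tensors="pt")
with torch.no_grad():
    outputs = model(**inputs)

# last_hidden_state has shape (1, num_tokens, 768): one
# 768-dimensional, context-dependent vector per (sub)word token.
word_vectors = outputs.last_hidden_state.squeeze(0)
\end{verbatim}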
\subsection{\textbf{Calculating similarity matrix}} The input to this stage is a set of vectors for
the two sentences $s_1$ and $s_2$, where each vector represents a word. At this stage, the
similarity between each word in $s_1$ and all words in $s_2$ is computed as the cosine similarity
between the two corresponding vectors.
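For two word vectors $u$ and $v$, the cosine similarity is
$\cos(u, v) = \frac{u \cdot v}{\|u\| \, \|v\|}$. A minimal NumPy sketch of this step
(variable names are illustrative; the inputs are the word-vector matrices of $s_1$ and $s_2$
obtained from the embedding layer above):
\begin{verbatim}
import numpy as np

def similarity_matrix(s1_vecs, s2_vecs):
    """Cosine similarity between every word of s1 and every word of s2.

    s1_vecs: array of shape (n, d), one row per word of s1
    s2_vecs: array of shape (m, d), one row per word of s2
    Returns an (n, m) matrix whose (i, j) entry is the cosine
    similarity between word i of s1 and word j of s2.
    """
    # Normalise rows to unit length so a dot product equals
    # the cosine similarity.
    s1_unit = s1_vecs / np.linalg.norm(s1_vecs, axis=1, keepdims=True)
    s2_unit = s2_vecs / np.linalg.norm(s2_vecs, axis=1, keepdims=True)
    return s1_unit @ s2_unit.T
\end{verbatim}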
