
POORNIMA COLLEGE OF ENGINEERING

DEPARTMENT OF INFORMATION TECHNOLOGY


An
SEMINAR PRESENTATION
On
Next Word Prediction using Deep Learning

Presented to:
Ms. Shazia Haque
Mrs. Seeta Gupta
(Faculty Coordinators)

Presented by:
Sanyam Modi (Reg. No.: PCE19IT051)
TABLE OF CONTENTS
• Abstract
• Introduction
• Technology
• LSTM & BiLSTM
• Model Accuracy
• Result Analysis
• Conclusion
• Advantage
• Future Scope
• Reference

ABSTRACT
Authors:
• Dr. Sanjeev Thakur
• Mr. Milind Soam

Next word prediction means predicting the next word in the sentence a user is currently typing. With the help of this model we can predict the next word for the user, so the user does not have to type it, reducing the number of keystrokes pressed, increasing typing speed, and causing fewer grammatical errors. To build this model we use two deep learning algorithms: LSTM (Long Short-Term Memory) and BiLSTM (Bidirectional Long Short-Term Memory). The accuracy for LSTM was found to be 58.27% and for BiLSTM 66.1%.
Natural Language Generation (NLG) focuses on the generation of natural, human-interpretable language.
INTRODUCTION
Artificial Neural Networks (ANNs) are a part of deep learning inspired by the neurons of the human brain. Because of ANNs there have been significant milestones in many fields, bringing evolution to those fields; these milestones were achieved because ANNs are the backbone of deep learning.
An Artificial Neural Network can be interpreted as a computing system that mimics the working of a human brain. With the help of this technology, various advancements are made possible in the field of science. It helps us find relationships between sets of data, which is very useful in this model, as data is one of the most important parts of any model.
LSTM

LSTM is a version of the RNN, a sequential network that allows information to persist, and it solves the vanishing gradient problem of RNNs. Basically, an RNN provides persistent memory: just as humans watching a video or reading a book remember the earlier context and use it to anticipate what comes next, an RNN remembers previous information and uses it to process the current input.
Long Short-Term Memory, or as I like to call it, the Solution to Long-Term Dependencies, resolves the vanishing gradient problem, letting the network retain relevant context over long sequences.
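The gating mechanism behind LSTM can be sketched in a few lines of NumPy. This is an illustrative toy with random weights, not the trained model from this presentation; `lstm_step` and all shapes here are hypothetical:

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def lstm_step(x, h_prev, c_prev, W, U, b):
    """One LSTM time step. W, U, b stack the parameters of the
    forget, input, candidate, and output gates (4 * hidden rows)."""
    n_hid = h_prev.shape[0]
    z = W @ x + U @ h_prev + b            # all four gate pre-activations at once
    f = sigmoid(z[0:n_hid])               # forget gate: what to drop from c_prev
    i = sigmoid(z[n_hid:2*n_hid])         # input gate: what to write
    g = np.tanh(z[2*n_hid:3*n_hid])       # candidate cell values
    o = sigmoid(z[3*n_hid:4*n_hid])       # output gate
    c = f * c_prev + i * g                # cell state carries long-term memory
    h = o * np.tanh(c)                    # hidden state is the step's output
    return h, c

rng = np.random.default_rng(0)
n_in, n_hid = 3, 4
W = rng.normal(size=(4 * n_hid, n_in))
U = rng.normal(size=(4 * n_hid, n_hid))
b = np.zeros(4 * n_hid)

h, c = np.zeros(n_hid), np.zeros(n_hid)
for x in rng.normal(size=(5, n_in)):      # run a short toy sequence
    h, c = lstm_step(x, h, c, W, U, b)
print(h.shape)  # (4,)
```

The additive update `c = f * c_prev + i * g` is what lets gradients flow over many steps without vanishing, which is the point made above.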
LSTM & BiLSTM

[Diagram slide: LSTM and BiLSTM architectures]
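The difference between the two architectures can be shown with a minimal NumPy sketch: a BiLSTM runs the sequence left-to-right and right-to-left and concatenates both hidden states. For brevity the two directions share weights here; a real BiLSTM learns separate parameters per direction, and all names are illustrative:

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def run_lstm(seq, W, U, b, n_hid):
    """Run a plain LSTM over `seq` and return the final hidden state."""
    h, c = np.zeros(n_hid), np.zeros(n_hid)
    for x in seq:
        z = W @ x + U @ h + b                  # all gate pre-activations
        f = sigmoid(z[0:n_hid])                # forget gate
        i = sigmoid(z[n_hid:2*n_hid])          # input gate
        g = np.tanh(z[2*n_hid:3*n_hid])        # candidate cell values
        o = sigmoid(z[3*n_hid:4*n_hid])        # output gate
        c = f * c + i * g
        h = o * np.tanh(c)
    return h

rng = np.random.default_rng(1)
n_in, n_hid = 3, 4
W = rng.normal(size=(4 * n_hid, n_in))
U = rng.normal(size=(4 * n_hid, n_hid))
b = np.zeros(4 * n_hid)

seq = rng.normal(size=(6, n_in))               # a toy embedded sentence
h_fwd = run_lstm(seq, W, U, b, n_hid)          # left-to-right context
h_bwd = run_lstm(seq[::-1], W, U, b, n_hid)    # right-to-left context
h_bi = np.concatenate([h_fwd, h_bwd])          # BiLSTM feature vector
print(h_bi.shape)  # (8,)
```

Seeing the sentence from both directions is why BiLSTM tends to predict the next word better than a one-directional LSTM, as the results below show.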
MODEL ACCURACY

[Chart slide: training accuracy of the LSTM and BiLSTM models]
RESULT ANALYSIS

[Chart slide: result comparison of the LSTM and BiLSTM models]
CONCLUSION
The conclusion is that BiLSTM performed the best. This result was expected, as BiLSTM was introduced to address the drawbacks of the LSTM model. In the end, BiLSTM achieved the highest accuracy as well as the least loss.
ADVANTAGE
The developed model can be used to predict the next word in a sentence. This can effectively reduce the number of words the user has to type, thus increasing typing speed. It also helps minimize spelling mistakes made by the user. In English-speaking countries, this system can be a boon.
FUTURE SCOPE
In the future, the system can be extended to other natural language generation tasks such as story auto-completion, poem auto-completion, etc. The model is limited to a specific dataset, and more randomness can be incorporated by enhancing its scope. The system can also be adapted to new words that are not part of its vocabulary: when the model encounters a new word, it adds that word to the vocabulary, making the model more generalized. Finally, the system can be personalized to predict words based on the user's history.
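The vocabulary-adaptation idea above can be sketched as a simple lookup table that maps unseen words to an `<unk>` token until they are added. This `Vocab` class is a hypothetical illustration, not part of the presented model:

```python
class Vocab:
    """Toy word-to-index vocabulary with an <unk> slot for unseen words."""

    def __init__(self, words):
        self.index = {"<unk>": 0}      # index 0 is reserved for unknown words
        for w in words:
            self.add(w)

    def add(self, word):
        """Adapt: register a new word the first time the model meets it."""
        if word not in self.index:
            self.index[word] = len(self.index)
        return self.index[word]

    def encode(self, word):
        """Map a word to its index, falling back to <unk> when unseen."""
        return self.index.get(word, 0)

v = Vocab(["the", "next", "word"])
print(v.encode("prediction"))   # 0: unseen words map to <unk>
v.add("prediction")             # the model met a new word: extend the vocabulary
print(v.encode("prediction"))   # 4: now a first-class vocabulary entry
```

In a full system, adding a word would also grow the embedding and output layers; the sketch only shows the bookkeeping step.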
REFERENCES
[1] P. P. Barman and A. Boruah, "A RNN based approach for next word prediction in Assamese phonetic transcription," 8th International Conference on Advances in Computing and Communication, 2018.
[2] R. Perera and P. Nand, "Recent advances in natural language generation: A survey and classification of the empirical literature," Computing and Informatics, vol. 36, pp. 1-32, 2017.
[3] C. Aliprandi, N. Carmignani, N. Deha, P. Mancarella, and M. Rubino, "Advances in NLP applied to word prediction," J. Mol. Biol., vol. 147, pp. 195-197, 2008.
[4] C. McCormick, Latent Semantic Analysis (LSA) for Text Classification Tutorial, 2019 (accessed February 3, 2019). http://mccormickml.com/2016/03/25/lsa-for-text-classification-tutorial/.
[5] Y. Wang, K. Kim, B. Lee, and H. Y. Youn, "Word clustering based on POS feature for efficient Twitter sentiment analysis," Human-centric Computing and Information Sciences, vol. 8, p. 17, Jun 2018.
[6] N. N. Shah, N. Bhatt, and A. Ganatra, "A unique word prediction system for text entry in Hindi," in Proceedings of the Second International Conference on Information and Communication Technology for Competitive Strategies, p. 118, ACM, 2016.
[7] M. K. Sharma and D. Samanta, "Word prediction system for text entry in Hindi," ACM Trans. Asian Lang. Inform. Process., 2014.
[8] R. Devi and M. Dua, "Performance evaluation of different similarity functions and classification methods using web based Hindi language question answering system," Procedia Computer Science, vol. 92, pp. 520-525, 2016.
[9] S. Hochreiter, "The vanishing gradient problem during learning recurrent neural nets and problem solutions," International Journal of Uncertainty, Fuzziness and Knowledge-Based Systems, vol. 6, no. 2, pp. 107-116, 1998.
[10] D. Pawade, A. Sakhapara, M. Jain, N. Jain, and K. Gada, "Story scrambler - automatic text generation using word level RNN-LSTM," Modern Education and Computer Science, 2018.
THANK YOU
