AbuseGPT: Abuse of Generative AI Chatbots to Create Smishing Campaigns
Abstract—SMS phishing, also known as “smishing”, is a growing threat that tricks users into disclosing private information or clicking on URLs with malicious content through fraudulent mobile text messages. In the recent past, we have also observed a rapid advancement of conversational generative AI chatbot services (e.g., OpenAI’s ChatGPT, Google’s BARD), which are powered by pre-trained large language models (LLMs). These AI chatbots certainly have many utilities, but it is not systematically understood how they can play a role in creating threats and attacks. In this paper, we propose the AbuseGPT method to show how the existing generative AI-based chatbot services can be exploited by attackers in the real world to create smishing texts and eventually lead to craftier smishing campaigns. To the best of our knowledge, there is no pre-existing work that evidently shows the impact of these generative text-based models on creating SMS phishing. Thus, we believe this study is the first of its kind to shed light on this emerging cybersecurity threat. We have found strong empirical evidence that attackers can bypass the ethical standards of the existing generative AI-based chatbot services by crafting prompt injection attacks to create newer smishing campaigns. We also discuss some future research directions and guidelines to prevent the abuse of generative AI-based services and to safeguard users from smishing attacks.

Index Terms—Smishing, abuse, generative AI, LLM, scam

I. INTRODUCTION

Smishing, or SMS phishing, refers to phishing attacks conducted through mobile text messaging. As mobile use has become ubiquitous, smishing has emerged as a popular vector for cyber criminals to steal personal information or spread malware [1]. In 2021 alone, SMS phishing caused $44 billion in losses within the United States [2]. There has been a 1,265% increase in malicious phishing messages since Q4 2022, and 39% of all mobile-based attacks in 2023 were SMS phishing [3]. In recent times, we have also seen revolutionary development in the field of large language models, which power popular generative AI chatbots such as ChatGPT (using GPT-3.5 or GPT-4) [4] and BARD (using Gemini “Pro”) [5]. Generative pre-trained models like GPT-4 [6] or LaMDA [7] are built on deep learning techniques like transformers [8] that allow them to develop a broad understanding of language. When pre-trained on large datasets, these language models can generate highly realistic, human-like text. They can demonstrate comprehension of written texts, answer complex questions, generate lengthy coherent stories, translate between languages, and hold conversations while maintaining context and personality. Their versatility, scalability, and ability to achieve strong performance with self-supervised learning make them extremely powerful [9].

In the literature, we have found that natural language processing (NLP) [10] and URL structure-based features [11] are often leveraged with machine learning (ML) [12] to aid smishing message detection mechanisms. There are a few recent studies showing how ChatGPT can be used to carry out various social engineering attacks [13]. To aid these attacks, there are jailbreaking prompts: specific, strategic commands or inputs that attempt to bypass restrictions, rules, or limitations set by the generative AI chatbots. However, the present literature does not provide any details on how the modern LLM-based generative AI chatbot services can help attackers create smishing messages and campaigns. In this paper, we want to understand how the currently available popular conversational generative AI chatbot services can be used to generate smishing messages. The objective is to comprehensively understand the abusive implications (i.e., use case scenarios) of these available state-of-the-art generative AI chatbot services in aiding attackers to create smishing campaigns. To the best of our knowledge, no systematic study or experiments have been conducted to showcase the side effects of generative AI chatbot services that can possibly aid existing smishing attackers in becoming craftier and more evasive. To summarize, we have made the following contributions in this paper:

• Finding the effective prompts for the current popular AI chatbot services.
• Finding the right prompt queries that can receive effective responses from AI chatbot services (e.g., ChatGPT) to provide smishing themes and potential example messages.
• Finding the right prompt queries that can receive effective responses from AI chatbot services to provide a detailed step-wise process and the available toolkits for carrying out smishing cyberattacks.
• Discussing how to potentially enhance the ethical standards of available generative AI chatbot services.

979-8-3503-3036-6/24/$31.00 ©2024 IEEE
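The NLP- and URL-structure-based detection features cited above ([10], [11]) can be illustrated with a minimal sketch that scores an SMS message on a few lure and link signals. The keyword list, feature weights, and threshold below are our own illustrative assumptions, not values taken from those works:

```python
# Minimal sketch of a feature-scored smishing check in the spirit of the
# NLP- and URL-structure-based detectors cited above; the keyword list,
# weights, and threshold are illustrative assumptions, not from [10], [11].
import re
from urllib.parse import urlparse

URGENCY_WORDS = {"urgent", "verify", "suspended", "locked", "winner",
                 "claim", "immediately", "final notice"}
SHORTENER_HOSTS = {"bit.ly", "tinyurl.com", "t.co", "goo.gl"}

def smishing_score(text: str) -> int:
    """Score an SMS message; higher means more smishing-like."""
    score = 0
    lowered = text.lower()
    # Keyword features: urgency / reward language common in smishing lures.
    score += sum(1 for w in URGENCY_WORDS if w in lowered)
    # URL-structure features.
    for url in re.findall(r"https?://\S+", text):
        host = urlparse(url).hostname or ""
        score += 1                                    # any URL at all
        if host in SHORTENER_HOSTS:
            score += 2                                # shortener hides the target
        if re.fullmatch(r"(\d{1,3}\.){3}\d{1,3}", host):
            score += 3                                # raw IP instead of a domain
        if host.count("-") >= 2 or host.count(".") >= 3:
            score += 1                                # long, spoofing-style host
    return score

def is_smishing(text: str, threshold: int = 3) -> bool:
    return smishing_score(text) >= threshold
```

Real detectors train such features with ML models [12] rather than hand-set weights; the sketch only shows the kind of signals involved.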
The rest of the paper is organized as follows: Section II highlights the existing literature and some recent studies showing the attacker’s capabilities with chatbots. Section III describes the research questions and an overview of the methods of abuse. Section IV presents the prompt injection scenarios and shows how AI chatbots (e.g., ChatGPT) can be abused by attackers to generate smishing campaigns. Section V highlights insightful discussion, defense against smishing, and limitations. Section VI concludes the paper.

[Figure: overview of the abuse workflow, in which conversational AI chatbots are weaponized by identifying jailbreak prompts, getting themed smishing text, generating fake domains and websites with those domains, getting cybercrime toolkits, and planning end-to-end attack steps.]

Fig. 3. Asking indirectly for a smishing message without jailbreaking
... personal information. The response in Fig. 6 has presented some useful thematic ideas for smishing. However, we have carried the conversation further to obtain more uncommon ideas to use as themes for crafting the scam messages, since advanced attackers may want to create smishing messages with previously unseen themes. Intriguingly, ChatGPT’s response unveiled a few more uncommon and innovative smishing message themes, as presented in Fig. 7. Furthermore, we have provided ChatGPT with a list of common phishing message types and asked for examples of smishing that would differ from the given categories, as shown in Fig. 8. In response, we have received examples of new and unusual categories, as highlighted in Fig. 9, showcasing ChatGPT’s capability to generate diverse and convincing smishing messages. Additionally, it has given us more unethical ideas for financial gain, such as using personal information to open fraudulent bank accounts, using credit card information to buy resellable goods, or converting stolen funds into cryptocurrency for increased anonymity. This revelation highlights that even novice attackers with limited knowledge could obtain ideas not only for smishing but also for crafting other online scams. Lastly, we have also obtained a social engineering Kill Chain process [21] for running a smishing campaign through ChatGPT, as shown in Fig. 10.

Fig. 7. Uncommon themed ideas for crafting smishing

Answering RQ3: In this scenario, we have assumed that an attacker needs toolkits to execute the attack steps, and that some of these toolkits are available online while others are on the Dark Web. We have queried ChatGPT with the following question: “What are some toolkits available for this task?”. In response, ChatGPT provided us with the names of toolkits along with their URLs (see Fig. 11).

Fig. 8. Prompt for avoiding common smishing

Fig. 10. Social Engineering Kill Chain steps for smishing campaign

Fig. 11. Getting hacking toolkits with URLs

Answering RQ4: We have extended our quest to attach some deceptive URL links that might work well with the crafted SMS texts. ChatGPT eventually yielded some disguised fake examples resembling popular brand-name links, as depicted in Fig. 12. Moreover, asking to associate the fake links
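A defensive counterpart to the disguised brand-look-alike links discussed under RQ4 is straightforward to sketch: compare the host of a URL found in an SMS against a list of known brand domains and flag near-matches. The brand list and similarity threshold below are illustrative assumptions:

```python
# Sketch of a look-alike (typosquatted) brand-domain check; the brand list
# and the 0.8 similarity threshold are illustrative assumptions.
import re
from difflib import SequenceMatcher
from typing import Optional
from urllib.parse import urlparse

KNOWN_BRANDS = ["paypal.com", "amazon.com", "apple.com", "netflix.com"]

def lookalike_brand(url: str, threshold: float = 0.8) -> Optional[str]:
    """Return the brand a URL's host imitates, or None if unrelated/genuine.

    An exact match on a known domain is trusted; a near-match (e.g. a
    typosquatted variant) is reported as a look-alike.
    """
    host = (urlparse(url).hostname or "").lower()
    # Strip a leading "www." so www.paypa1.com compares against paypa1.com.
    host = re.sub(r"^www\.", "", host)
    for brand in KNOWN_BRANDS:
        if host == brand:
            return None  # the genuine domain, not an imitation
        if SequenceMatcher(None, host, brand).ratio() >= threshold:
            return brand
    return None
```

For example, `lookalike_brand("http://paypa1.com/verify")` flags the host as imitating `paypal.com`, while the genuine domain passes. A production check would use a registrar-aware domain parser and a much larger brand list.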
... attackers can compromise AI chatbots to learn how to avoid typical attacks and be innovative with newer attacks.

C. Defense Recommendations Against Smishing

Dealing with these crafty and ever-changing smishing attacks calls for smart, multi-layer defensive strategies. First, maintaining cyber situational awareness of the latest tricks adopted by AI-driven attackers is a must. Second, education and training are also key elements to ensure that users are well aware of these smishing threats and can thus act as a defense shield. Third, every text message containing any URL should go through a verification process before landing in the user’s inbox, either through a third-party application or by incorporating text-URL verification into future messaging apps (where user privacy is protected).

D. Limitations

There are a few limitations in this study. First, the success of prompt injection can be time-sensitive: the current study was conducted successfully between November 2023 and January 2024, and its success may diminish if ChatGPT starts enforcing stricter ethical standards for specific jailbreak prompts in the future. Second, we have not tried to register any fake domains generated by ChatGPT, which might not be straightforward. Third, we have not evaluated the attack success rate of the AI-crafted smishing messages against real humans with a controlled user study, which we can explore in the future as an extension of the present work.

VI. CONCLUSION

Smishing attacks are serious cyber threats in the current ecosystem of increasing and diverse mobile users. Moreover, the availability of AI chatbots and their lower ethical standards make the problem even more severe. Our proposed AbuseGPT method shows that the current AI chatbots are vulnerable and pose a threat of enabling craftier smishing campaigns with very little knowledge required. AbuseGPT does not intend to promote these attacks in the real world but highlights the urgent need to strengthen generative AI’s security and prevent these abuse use cases. We recommend preventive and proactive actions from both the AI chatbot owners and the mobile operators whose networks are also abused for propagating SMS phishing campaigns. We strongly feel that SMS advertising ecosystems are often leveraged by attackers to broadcast smishing messages easily, which needs to be addressed by mobile network operators to safeguard their mass users. In the future, we want to explore how the advancement of generative AI and LLMs can aid the defense spectrum to achieve meaningful and contextual explanation-based automated detection of these smishing messages before they land in the user’s inbox.

REFERENCES

[1] M. L. Rahman, D. Timko, H. Wali, and A. Neupane, “Users really do respond to smishing,” in Proceedings of the Thirteenth ACM Conference on Data and Application Security and Privacy, ser. CODASPY ’23. New York, NY, USA: Association for Computing Machinery, 2023, pp. 49–60. [Online]. Available: https://doi.org/10.1145/3577923.3583640
[2] CGNET, “Smishing attacks sharply on the rise,” https://cgnet.com/blog/smishing-attacks-sharply-on-the-rise/ (Accessed on 26 November, 2023).
[3] SlashNext, “SlashNext’s 2023 state of phishing report,” https://www.prnewswire.com/news-releases/slashnexts-2023-state-of-phishing-report-reveals-a-1-265-increase-in-phishing-emails-since-the-launch-of-chatgpt-in-november-2022–signaling-a-new-era-of-cybercrime-fueled-by-generative-ai-301971557.html (Accessed on 21 January, 2024).
[4] OpenAI, “Language models are few-shot learners,” Advances in Neural Information Processing Systems, vol. 33, pp. 1877–1901, 2020.
[5] Google, “Introducing Gemini: our largest and most capable AI model,” https://blog.google/technology/ai/google-gemini-ai/ (Accessed on 26 January, 2023).
[6] OpenAI, “GPT-4 technical report,” 2023.
[7] R. Thoppilan, D. D. Freitas, J. Hall, N. Shazeer, A. Kulshreshtha, H.-T. Cheng, A. Jin, T. Bos, L. Baker, Y. Du, Y. Li, H. Lee, H. S. Zheng, A. Ghafouri, M. Menegali, Y. Huang, M. Krikun, D. Lepikhin, J. Qin, D. Chen, Y. Xu, Z. Chen, A. Roberts, M. Bosma, V. Zhao, Y. Zhou, C.-C. Chang, I. Krivokon, W. Rusch, M. Pickett, P. Srinivasan, L. Man, K. Meier-Hellstern, M. R. Morris, T. Doshi, R. D. Santos, T. Duke, J. Soraker, B. Zevenbergen, V. Prabhakaran, M. Diaz, B. Hutchinson, K. Olson, A. Molina, E. Hoffman-John, J. Lee, L. Aroyo, R. Rajakumar, A. Butryna, M. Lamm, V. Kuzmina, J. Fenton, A. Cohen, R. Bernstein, R. Kurzweil, B. Aguera-Arcas, C. Cui, M. Croak, E. Chi, and Q. Le, “LaMDA: Language models for dialog applications,” 2022.
[8] A. Vaswani, N. Shazeer, N. Parmar, J. Uszkoreit, L. Jones, A. N. Gomez, L. Kaiser, and I. Polosukhin, “Attention is all you need,” in Proceedings of the 31st International Conference on Neural Information Processing Systems, ser. NIPS’17. Red Hook, NY, USA: Curran Associates Inc., 2017, pp. 6000–6010.
[9] A. Radford, J. W. Kim, C. Hallacy, A. Ramesh, G. Goh, S. Agarwal, G. Sastry, A. Askell, P. Mishkin, J. Clark et al., “Learning transferable visual models from natural language supervision,” in International Conference on Machine Learning. PMLR, 2021, pp. 8748–8763.
[10] S. Mishra and D. Soni, “DSmishSMS: a system to detect smishing SMS,” Neural Computing and Applications, vol. 35, no. 7, pp. 4975–4992, 2023. [Online]. Available: https://link.springer.com/10.1007/s00521-021-06305-y
[11] A. K. Jain, B. B. Gupta, K. Kaur, P. Bhutani, W. Alhalabi, and A. Almomani, “A content and URL analysis-based efficient approach to detect smishing SMS in intelligent systems,” International Journal of Intelligent Systems, vol. 37, no. 12, pp. 11117–11141, 2022. [Online]. Available: https://onlinelibrary.wiley.com/doi/10.1002/int.23035
[12] S. M. M. Hossain, J. A. Sumon, A. Sen, M. I. Alam, K. M. A. Kamal, H. Alqahtani, and I. H. Sarker, “Spam filtering of mobile SMS using CNN–LSTM based deep learning model,” ser. Hybrid Intelligent Systems. Cham: Springer International Publishing, 2022, vol. 420, pp. 106–116. [Online]. Available: https://link.springer.com/10.1007/978-3-030-96305-7_10
[13] D. Vukovic and I. Dujlovic, “Social engineering with ChatGPT,” Mar. 2023, pp. 1–5.
[14] T. F. Blauth, O. J. Gstrein, and A. Zwitter, “Artificial intelligence crime: An overview of malicious use and abuse of AI,” IEEE Access, vol. 10, pp. 77110–77122, 2022.
[15] Y. Liu, G. Deng, Z. Xu, Y. Li, Y. Zheng, Y. Zhang, L. Zhao, T. Zhang, and Y. Liu, “Jailbreaking ChatGPT via prompt engineering: An empirical study,” arXiv preprint arXiv:2305.13860, 2023.
[16] G. Deng, Y. Liu, Y. Li, K. Wang, Y. Zhang, Z. Li, H. Wang, T. Zhang, and Y. Liu, “Masterkey: Automated jailbreaking of large language model chatbots.”
[17] M. Gupta, C. Akiri, K. Aryal, E. Parker, and L. Praharaj, “From ChatGPT to ThreatGPT: Impact of generative AI in cybersecurity and privacy,” IEEE Access, vol. 11, pp. 80218–80245, 2023.
[18] N. Begou, J. Vinoy, A. Duda, and M. Korczynski, “Exploring the dark side of AI: Advanced phishing attack design and deployment using ChatGPT,” 2023.
[19] T. Langford and B. Payne, “Phishing faster: Implementing ChatGPT into phishing campaigns,” Nov. 2023, pp. 174–187.
[20] “Jailbreak chat,” https://www.jailbreakchat.com/ (Accessed on 22 December, 2023).
[21] R. Montanez Rodriguez and S. Xu, “Cyber social engineering kill chain,” in International Conference on Science of Cyber Security. Springer, 2022, pp. 487–504.
[22] “Chat GPT ’DAN’ (and other ’jailbreaks’),” https://gist.github.com/coolaj86/6f4f7b30129b0251f61fa7baaa881516 (Accessed on 22 December, 2023).