The Bulrushes
Copyright © 2026 The Bulrushes
Columns | Featured

ChatGPT: What Dangers Lurk Behind Impressive New Technology? 

Anna Collard
Published: February 16, 2023
NO MALICE INTENDED: Anna Collard, SVP Content Strategy & Evangelist at KnowBe4 Africa, asks what cybersecurity dangers lurk behind this impressive new technology.

It is now possible to use a publicly available artificial intelligence chatbot to generate a complete infection chain, possibly beginning with a spear-phishing email written in entirely convincing, human-like language and ending in a complete takeover of a company’s computer systems.

Researchers at Check Point recently created just such a plausible phishing email as a test.

They used only ChatGPT, a chatbot that uses deep learning techniques to generate text and conversations that can convince almost anyone they were written by a real person.

In reality, there are many potential cybersecurity dangers wrapped up in this impressive technology developed by OpenAI and currently available online for free.

Here are just a few of them:

  • Social engineering: ChatGPT’s powerful language model can be used to generate realistic and convincing phishing messages, making it easier for attackers to trick victims into providing sensitive information or downloading malware.
  • Scamming: ChatGPT’s text generation allows attackers to create fake ads, listings, and many other forms of scam material.
  • Impersonation: ChatGPT can be used to create a convincing digital copy of an individual’s writing style, allowing attackers to impersonate their target in a text-based setting, such as in an email or text message.
  • Automation of attacks: ChatGPT can also be used to automate the creation of malicious messages and phishing emails, making it possible for attackers to launch large-scale attacks more efficiently.
  • Spamming: The language model can be fine-tuned to produce large amounts of low-quality content, which can be used in a variety of contexts, including as spam comments on social media or in spam email campaigns.

All five points above are legitimate threats to companies and internet users alike, and they will only become more prevalent as OpenAI continues to train its model.

If the list managed to convince you, the technology succeeded in its purpose, although in this instance not with malicious intent.

All the text from points one to five was actually written by ChatGPT with minimal tweaks for clarity. The tool is so powerful it can convincingly identify and word its own inherent dangers to cybersecurity.

However, there are mitigating steps individuals and companies can take, including new-school security awareness training. 

Cybercrime is moving at light speed. 

A few years ago, cybercriminals used to specialise in identity theft, but now they take over your organisation’s network, hack into your bank accounts, and steal tens or hundreds of thousands of rands.

An intelligent platform like ChatGPT may have been created with the best intentions, but it only adds to the burden on internet users to stay vigilant, trust their instincts, and know the risks involved in clicking on any link or opening any attachment.

*The writer of this article, Anna Collard, is SVP Content Strategy & Evangelist at KnowBe4 Africa. The views expressed by Anna Collard are not necessarily those of The Bulrushes.

YOU CAN TRY IT: ChatGPT: Optimizing Language Models for Dialogue (openai.com)

