ChatGPT 4: a way to create malware without prior knowledge

by Eduard Bardají on Sep 23, 2024 8:52:06 AM


In recent years, the cybersecurity sector has seen significant changes that have altered the way cyberattacks are launched against companies. One of the biggest of these changes has been Artificial Intelligence.

Artificial Intelligence, although it sometimes helps us with tedious tasks in our daily lives, has also enabled cybercriminals with limited technical knowledge to launch complex cyberattacks that are difficult to detect.


How is ChatGPT used to launch cyberattacks?

ChatGPT 4 is an advanced version of the language model developed by OpenAI, with improved natural-language understanding and generation capabilities and a larger, more up-to-date knowledge base. In short, the tool is far more capable in every area where it can be applied, both for doing 'good' and for doing 'bad,' that is, for lawful or unlawful purposes.

Malware generation

ChatGPT 4 has become one of the tools most commonly used by cybercriminals to create malware, since this powerful AI is capable of generating malicious code. Although the tool is designed not to produce content that could endanger users, its restrictions can be bypassed, allowing someone without technical knowledge to develop all kinds of scripts.

Therefore, if a cybercriminal doesn't know how to create malicious code for their cyberattacks, as long as they know what to ask for and how to do it, the tool takes care of the rest.

Detection evasion

The tool can assist in generating text so that once the attack is launched, it can bypass the spam filters and security of the targeted system, thereby increasing the effectiveness of the cyberattack.

Exploitation of vulnerabilities

A recent study at the University of Illinois in the United States showed that ChatGPT 4 is capable of exploiting one-day vulnerabilities, that is, publicly disclosed security flaws for which no patch is yet available, with an 87% success rate. The result is particularly alarming when compared to its previous version, ChatGPT 3.5, which failed the same tests.

Phishing techniques

Apart from malware generation, ChatGPT can also be used for phishing attacks and their derivatives. The tool is very effective at generating texts for mass email campaigns that try to steal the credentials of a company's employees, which can then open the door to another, more harmful cyberattack.

Additionally, ChatGPT has an API that allows other artificial intelligences to draw from its extensive database. This can be very beneficial if used correctly, but it can also be used to deceive users and persuade them to take actions that are harmful to themselves.

Examples of cyberattacks launched with Artificial Intelligence

Next, we will look at real examples of cyberattacks launched with the help of AI. These are generally phishing attacks, as this is one of the attack techniques where AI has the greatest impact, thanks to its ability to quickly generate persuasive texts.

Phishing scam targeting a Revolut employee

Revolut, a fintech company known for its innovative financial services, fell victim to a social engineering phishing attack. An employee was targeted in a scam that gave the cybercriminal access to the British company's systems. According to the company itself, the intrusion affected over 50,000 users, leaking their addresses, phone numbers, and full names, and exposing partial payment card data.

Fake job offers on LinkedIn

Thanks to text generation by an Artificial Intelligence like ChatGPT, fraudulent job offers were created to steal credentials or access keys to systems of recognized companies.

Cybercriminals impersonated large companies to attract the attention of their victims, and through malicious links, they directed users to fraudulent portals to steal their private information.

Measures to prevent cyberattacks launched with ChatGPT 4

Cyberattacks launched with ChatGPT are not infallible and can be prevented. One of the main defenses is training all of a company's employees to identify and report potentially fraudulent messages. Alongside employee training, it is also essential to deploy specific cybersecurity tools to prevent security breaches and cyberattacks.

At ESED, we help you implement all the measures needed to combat any cyberattack. We offer ESED Training, specific training for companies in which we teach IT best practices to protect the information and security of the business.

We have also developed our own cybersecurity tools. The first, Petam.io, is an automatic online scanner to search, detect, and analyze security breaches and vulnerabilities on websites. And our latest innovation, WWatcher, is a tool that prevents data leaks and breaches by limiting file downloads within your company.