
July 9, 2024
Contact: Janice Haven, [email protected]

Artificial intelligence (AI)-powered chatbots can pass a certified ethical hacking exam, but don’t rely on them for complete protection.

That’s the conclusion of a recent paper co-authored by University of Missouri researcher Prasad Calyam and colleagues at Amrita University in India. The team tested two leading generative AI tools — OpenAI’s ChatGPT and Google’s Bard — using questions from a standard certified ethical hacking exam.

Certified ethical hackers are cybersecurity professionals who use the same tricks and tools as malicious hackers to find and fix security flaws. Ethical hacking exams measure a person’s knowledge of different types of attacks, how to protect systems and how to respond to security breaches.


ChatGPT and Bard, now Gemini, are advanced AI programs called large language models (LLMs). They generate human-like text using neural networks with billions of parameters, which allows them to answer questions and create content.

In the study, Calyam and his team tested the bots with standardized questions from a certified ethical hacking exam. For example, they challenged the AI tools to describe a man-in-the-middle attack — an attack in which a third party intercepts communication between two systems. Both tools were able to explain the attack and suggest preventive measures against it.
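The study itself only quizzed the chatbots, but the interception concept behind the exam question can be illustrated with a short sketch. The code below (not from the paper; a minimal local demonstration) stands up a toy echo server, then a proxy that silently sits between client and server, recording every byte it relays — the essence of a man-in-the-middle attack on unencrypted traffic.

```python
import socket
import threading

def start_echo_server(host="127.0.0.1"):
    """A toy server that echoes back whatever one client sends."""
    srv = socket.socket()
    srv.bind((host, 0))          # port 0: let the OS pick a free port
    srv.listen(1)
    def serve():
        conn, _ = srv.accept()
        data = conn.recv(4096)
        conn.sendall(b"echo:" + data)
        conn.close()
    threading.Thread(target=serve, daemon=True).start()
    return srv.getsockname()     # (host, port) the client should dial

def start_mitm_proxy(target, transcript, host="127.0.0.1"):
    """An attacker's relay: forwards traffic to `target`, logging both directions."""
    prx = socket.socket()
    prx.bind((host, 0))
    prx.listen(1)
    def intercept():
        client, _ = prx.accept()
        upstream = socket.create_connection(target)
        request = client.recv(4096)
        transcript.append(("client->server", request))  # attacker reads the plaintext
        upstream.sendall(request)                       # ...then passes it along
        reply = upstream.recv(4096)
        transcript.append(("server->client", reply))
        client.sendall(reply)
        client.close()
        upstream.close()
    threading.Thread(target=intercept, daemon=True).start()
    return prx.getsockname()

transcript = []
server_addr = start_echo_server()
proxy_addr = start_mitm_proxy(server_addr, transcript)

# The client believes it is talking directly to the server,
# but is actually connected to the attacker's proxy.
client = socket.create_connection(proxy_addr)
client.sendall(b"secret password")
reply = client.recv(4096)
client.close()

print(reply)       # the client sees a normal-looking reply
print(transcript)  # yet the proxy captured traffic in both directions
```

The client's exchange succeeds normally, which is what makes the attack hard to notice; the standard defense the chatbots described — encrypting and authenticating the channel, e.g. with TLS — works because the relay can no longer read or tamper with the bytes it forwards.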

Overall, Bard slightly outperformed ChatGPT on accuracy, while ChatGPT performed better on comprehensiveness, clarity and conciseness, the researchers found.

“We put them through a series of test scenarios to see how far they would go in terms of answering questions,” said Calyam, the Greg L. Gilliom Professor of Cyber Security in electrical engineering and computer science at Mizzou. “Both passed the test and gave good answers that were understandable to someone with a background in cyber defense — but they also gave some wrong answers. And in cybersecurity, there’s no room for error. If you don’t plug all the holes and rely on potentially harmful advice, you’ll be attacked again. And it’s dangerous if companies think they’ve solved a problem but haven’t.”

The researchers also found that when the platforms were asked to confirm their answers with follow-up prompts such as “Are you sure?”, both systems changed their responses, often correcting earlier mistakes. When asked for advice on how to attack a computer system, ChatGPT referred to “ethics” while Bard replied that it was not designed to help with that type of question.

Calyam doesn’t believe these tools can replace human cybersecurity experts in designing robust cyber defense measures, but they can provide basic information for individuals or small companies who need immediate help.

“These AI tools can be a good starting point for investigating problems before consulting an expert,” he said. “They can also be good training tools for those working with information technology or those wanting to learn the basics of identifying and defining emerging threats.”

The best part? These AI tools will only continue to improve their capabilities, he said.

“Research shows that AI models have the potential to contribute to ethical hacking, but more work is needed to fully exploit their potential,” Calyam said. “Ultimately, if we can vouch for their accuracy as ethical hackers, we can improve overall cybersecurity measures and rely on them to help us make our digital world safer and more secure.”

The study, “ChatGPT or Bard: Who is a better Certified Ethical Hacker?,” was published in the May issue of the journal Computers & Security. Co-authors were Raghu Raman and Krishnashree Achuthan.
