Cybersecurity

Credit: Pixabay/CC0 Public Domain

Artificial intelligence (AI)-powered chatbots can pass cybersecurity exams, but don’t rely on them for complete protection.

That’s the conclusion of a recent paper co-authored by University of Missouri researcher Prasad Calyam and colleagues at Amrita University in India. The team tested two leading generative AI tools — OpenAI’s ChatGPT and Google’s Bard — using a standard certified ethical hacking exam.

Certified ethical hackers are cybersecurity professionals who use the same tricks and tools as malicious hackers to find and fix security flaws. Ethical hacking exams measure a person’s knowledge of different types of attacks, how to protect systems and how to respond to security breaches.

ChatGPT and Bard, now Gemini, are advanced AI programs called large language models. They generate human-like text using networks with billions of parameters that allow them to answer questions and create content.

In the study, Calyam and his team tested the bots with standardized questions from a certified ethical hacking exam. For example, they challenged the AI tools to describe a man-in-the-middle attack, in which a third party intercepts communication between two systems. Both were able to explain the attack and suggest security measures to prevent it.
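As a hedged illustration of the kind of safeguard the chatbots described, the minimal Python sketch below enforces TLS certificate verification, one standard defense against man-in-the-middle interception. The URL and function name are placeholders, not details from the study.

```python
# Minimal sketch of one standard defense against man-in-the-middle attacks:
# refusing connections whose TLS certificates cannot be verified.
# The URL and function name are placeholders, not details from the study.
import requests


def fetch_securely(url: str) -> str:
    # verify=True (the default) makes requests validate the server's
    # certificate chain; a MITM proxy presenting a forged certificate
    # raises an SSLError instead of silently returning tampered data.
    response = requests.get(url, timeout=10, verify=True)
    response.raise_for_status()
    return response.text


if __name__ == "__main__":
    try:
        print(fetch_securely("https://example.com")[:80])
    except requests.exceptions.SSLError as exc:
        # A certificate failure is a warning sign that traffic may be
        # intercepted; do not "fix" it by retrying with verify=False.
        print(f"TLS verification failed: {exc}")
```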

Overall, Bard slightly outperformed ChatGPT in terms of accuracy, while ChatGPT performed better in terms of comprehensiveness, clarity and conciseness, the researchers found.

“We put them through several scenarios from the exam to see how far they would go in terms of answering the questions,” said Calyam, the Greg L. Gilliom Professor of Cyber Security in Electrical Engineering and Computer Science at Mizzou.

“Both passed the test and gave good responses that were understandable to someone with a background in cyber defense, but they also gave some wrong answers. And in cybersecurity, there’s no room for error. If you don’t plug all the holes and rely on potentially harmful advice, you’ll be attacked again. And it’s dangerous if companies think they’ve fixed a problem when they haven’t.”

The researchers also found that when the platforms were asked to confirm their answers with prompts such as “Are you sure?”, both systems revised their responses, often correcting previous mistakes. When the programs were asked for advice on how to attack a computer system, ChatGPT referred to “ethics,” while Bard replied that it was not designed to help with that type of question.
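For readers who want to try the follow-up prompting pattern described above, the sketch below shows one way to do it with OpenAI’s Python client. The model name and the sample exam-style question are illustrative assumptions, not the exact setup the researchers used.

```python
# Hedged sketch of the "Are you sure?" follow-up pattern, using OpenAI's
# Python client. The model name and exam-style question are illustrative
# placeholders; the study queried the chatbots' own interfaces.
from openai import OpenAI

client = OpenAI()  # expects OPENAI_API_KEY in the environment

MODEL = "gpt-4o-mini"  # placeholder model, not the one from the study
question = ("What is a man-in-the-middle attack, and what measures "
            "can prevent it?")

messages = [{"role": "user", "content": question}]
first = client.chat.completions.create(model=MODEL, messages=messages)
answer = first.choices[0].message.content
print("Initial answer:\n", answer)

# Feed the answer back and ask the model to confirm it, mirroring the
# researchers' follow-up prompt.
messages += [
    {"role": "assistant", "content": answer},
    {"role": "user", "content": "Are you sure?"},
]
second = client.chat.completions.create(model=MODEL, messages=messages)
print("Follow-up answer:\n", second.choices[0].message.content)
```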

Calyam doesn’t believe these tools can replace human cybersecurity experts who design robust cyber defense measures, but they can provide basic information for individuals or small companies that need quick help.

“These AI tools can be a good starting point for investigating issues before consulting an expert,” he said. “They can also be good training tools for those working in information technology or who want to learn the basics of identifying and explaining emerging threats.”

The best part? The AI tools will only continue to improve, he said.

“Research shows that AI models have the potential to contribute to ethical hacking, but more work is needed to fully exploit their potential,” Calyam said. “Ultimately, if we can vouch for their accuracy as ethical hackers, we can improve overall cybersecurity measures and rely on them to help make our digital world safer and more secure.”

The study, “ChatGPT or Bard: Who is the better Certified Ethical Hacker?”, was published in the May issue of the journal Computers & Security. Co-authors were Raghu Raman and Krishnashree Achuthan of Amrita University.

More information:
Raghu Raman et al, ChatGPT or Bard: Who is the better Certified Ethical Hacker?, Computers & Security (2024). DOI: 10.1016/j.cose.2024.103804

Citation: AI chatbots can pass certified ethical hacking exams, study finds (2024, July 9). Retrieved July 9, 2024 from https://techxplore.com/news/2024-07-ai-chatbots-certified-ethical-hacking.html

This document is subject to copyright. No part may be reproduced without written permission, except for any fair dealing for the purpose of private study or research. The content is provided for informational purposes only.


