Can AI force a rethink on cybersecurity hiring?
A Stanford study reveals that an AI agent, ARTEMIS, rivals human cybersecurity experts in detecting IT weaknesses, prompting a hiring strategy rethink.
As artificial intelligence continues to advance, companies are exploring new avenues for safeguarding their digital assets. A recent study from Stanford University suggests that AI could significantly impact cybersecurity hiring practices.
The study demonstrates that an AI agent can effectively compete with, and sometimes outperform, experienced human hackers in identifying security vulnerabilities within IT systems.
The AI system, named ARTEMIS, was pitted against ten cybersecurity professionals in a controlled experiment. ARTEMIS placed second overall, successfully detecting security flaws that some human testers overlooked, while handling multiple tasks in parallel.
ARTEMIS spent 16 hours scanning Stanford's public and private computer science networks, examining thousands of devices for potential weaknesses. The results showed that ARTEMIS performed better than the majority of human testers and at a significantly lower cost, according to Business Insider.
The study highlights the cost-effectiveness of AI in cybersecurity. Running ARTEMIS costs approximately $18 per hour. By comparison, the average annual salary of a professional penetration tester is around $125,000, which works out to roughly $60 per hour for a full-time schedule. Even a more advanced AI version, costing $59 per hour, remains more economical than hiring a top-tier human expert.
The research team behind ARTEMIS included Stanford researchers Justin Lin, Eliot Jones, and Donovan Jasper. They developed the system after observing that existing AI tools struggled with complex, long-running security tasks.