
GUEST OPINION: For some years now, cybersecurity professionals have been excited about the impact that artificial intelligence (AI) can have on improving IT security. Every new AI technology, from machine learning (ML) to natural language processing (NLP) to generative AI such as ChatGPT, brings a fresh wave of enthusiasm and new ways to bolster network defences.
Amongst all this enthusiasm, it’s worth remembering that AI isn’t purely positive. Like any powerful new technology, AI’s recent democratization raises additional challenges and difficulties for cybersecurity, even as it delivers innovative solutions to complex issues.
To help cybersecurity teams be prepared for whatever AI brings their way, we’ve gathered seven ways that AI can both assist and obstruct cybersecurity in 2024.
AI helps by closing security gaps faster
Knowledge gaps are serious handicaps for cyber governance, risk, and compliance (GRC) teams, who need to be able to gather and present data demonstrating their GRC posture. NLP-powered conversational interfaces help close these gaps: you can ask AI in natural language for guidance on evidence collection, and receive suggestions about steps to take for remediation.
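To make this concrete, here’s a minimal sketch of what such a conversational GRC interface might look like, using the OpenAI Python client as a stand-in. The model name, system prompt, and question are illustrative assumptions, not any particular vendor’s implementation.

```python
# Minimal sketch of a conversational GRC helper. Assumes the OpenAI
# Python client (pip install openai) and an OPENAI_API_KEY in the
# environment; model choice and prompts are illustrative only.
from openai import OpenAI

client = OpenAI()

def ask_grc_assistant(question: str) -> str:
    """Send a natural-language compliance question to an LLM."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # hypothetical choice; any chat model works
        messages=[
            {"role": "system",
             "content": "You are a cyber GRC assistant. Suggest concrete "
                        "evidence-collection and remediation steps."},
            {"role": "user", "content": question},
        ],
    )
    return response.choices[0].message.content

print(ask_grc_assistant(
    "What evidence should we collect to show MFA is enforced for admins?"))
```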
Cypago conducts AI-powered User Access Reviews on behalf of cyber GRC teams. It collates information about users and their permission levels, saving a lot of time and effort for IT security professionals. The solution also enables admins to change access permissions for any tech tool within the Cypago app, using natural language commands.
AI hurts by improving phishing personalization
Phishing attacks are already the scourge of cybersecurity teams, and the best defence includes thorough, frequent employee education. It takes only one person falling for one well-worded phishing message to bring an entire corporation to its knees.
Generative AI can produce highly convincing text seeded with personal details drawn from publicly available information. These phishing messages can be just as persuasive as those written by humans, and they can be prepared in bulk within a matter of minutes using minimal prompts. Cyber attackers exploit these capabilities to deploy dangerously effective phishing campaigns, often cycling through numerous variations until they find one that works.
AI helps by delivering better continuous monitoring
IT teams always need to stay on top of emerging risks and new attack vectors, but it’s increasingly difficult to keep up with rising threats across the ever-expanding web. AI monitoring tools can run ceaselessly in the background, analyzing far more data than any human team could manage and highlighting potential compliance gaps and cyber risks.
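As a rough sketch of this always-on pattern, the loop below polls an event feed and flags high-risk items; fetch_new_events() and the keyword weights are placeholder assumptions standing in for a real source such as a SIEM export or threat-intel feed.

```python
# Minimal sketch of an always-on monitoring loop. fetch_new_events()
# and the keyword weights are hypothetical stand-ins for a real feed.
import time

RISK_KEYWORDS = {"leaked credential": 0.9, "exposed bucket": 0.8,
                 "expired certificate": 0.5}

def score_event(event: str) -> float:
    """Crude risk score: highest-weighted keyword found in the event."""
    return max((w for k, w in RISK_KEYWORDS.items() if k in event.lower()),
               default=0.0)

def fetch_new_events() -> list[str]:
    return []  # placeholder: pull from your log or threat-intel source

while True:
    for event in fetch_new_events():
        if score_event(event) >= 0.8:
            print(f"ALERT: {event}")  # in practice, page the on-call team
    time.sleep(60)  # re-scan every minute, around the clock
```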
This is the role carried out by Flare’s AI assistant to enable prompt and effective responses. The platform automatically and continually scans both the surface web and the dark web to detect security exposures, and instantly carries out autonomous takedowns. The system notifies relevant security professionals using real-time alerts with automated event contextualization that draws on AI language models.
AI hurts by increasing code vulnerabilities
Like all powerful tools, AI can be exploited by malicious actors. Large language models (LLMs), which underpin most generative AI tools, can be used to produce problematic code that wreaks havoc on organizations and individuals. Some 31% of organizations are already using generative AI to write code, according to one recent study.
Such code might expose sensitive information from online databases, delete critical data, disrupt cloud services, or cause other damaging effects. When using generative AI to create apps, it’s always best to perform rigorous quality assurance checks before deployment.
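As one illustration of such a check, the sketch below greps AI-generated Python for a few dangerous constructs before it ships. The pattern list is a minimal, assumption-laden example; a real pipeline would pair a proper static analyzer such as Bandit with human review.

```python
# Minimal sketch of a pre-deployment QA gate for AI-generated Python.
# The pattern list is illustrative, not exhaustive.
import re

RISKY_PATTERNS = {
    r"\beval\(": "arbitrary code execution",
    r"\bos\.system\(": "unsanitised shell command",
    r"execute\(.*%s.*%": "possible SQL injection via string formatting",
    r"verify\s*=\s*False": "TLS verification disabled",
}

def review_generated_code(source: str) -> list[str]:
    """Return a list of findings for the supplied code string."""
    return [f"{reason}: pattern {pat!r}"
            for pat, reason in RISKY_PATTERNS.items()
            if re.search(pat, source)]

snippet = 'import os\nos.system("rm -rf " + user_input)'
for finding in review_generated_code(snippet):
    print("FLAG:", finding)
```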
AI helps by identifying patterns and anomalies
Traditional anomaly detection methods, such as manual inspection and threshold-based techniques, are limited: they suffer from high false-alarm rates and human error. AI overcomes these drawbacks with machine learning algorithms that digest large amounts of data very quickly and adapt to evolving trends, detecting anomalies more accurately. When the system finds outlier data, it alerts administrators and can even take predefined automatic actions, such as suspending a user session or shutting down a server.
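Here’s a minimal sketch of that idea using scikit-learn’s IsolationForest; the session features, training data, and “suspend session” response are illustrative assumptions rather than a production detector.

```python
# Minimal sketch of ML-based anomaly detection on login telemetry.
# Assumes scikit-learn; features and data are illustrative.
import numpy as np
from sklearn.ensemble import IsolationForest

# Features per session: [login_hour, MB_downloaded, failed_logins]
normal_sessions = np.array([[9, 40, 0], [10, 55, 1], [14, 30, 0],
                            [11, 60, 0], [16, 45, 1], [9, 50, 0]])

model = IsolationForest(contamination=0.1, random_state=42)
model.fit(normal_sessions)  # learn this organization's "normal"

def check_session(session: list[float]) -> None:
    if model.predict([session])[0] == -1:  # -1 marks an outlier
        print(f"Anomaly: {session} -- suspending session")  # predefined action
    else:
        print(f"Normal: {session}")

check_session([10, 48, 0])   # looks like baseline traffic
check_session([3, 900, 12])  # 3 a.m., 900 MB out, 12 failed logins
```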
For example, Darktrace’s “Detect” feature analyzes every user and server in a company to learn the organization’s unique version of “normal” operations. AI is employed to monitor thousands of metrics to pick up on patterns – and thus, also subtle deviations from them, which may signal evolving threats. It can connect the dots between numerous singular events and reduce them to a handful of high-priority incidents for security teams to review.
AI hurts by obfuscating data protection
Chatbots and custom GPTs have proven revolutionary in enabling organizations to deliver timely support to employees and customers around the clock. However, the huge uptake has its downside. Chatbots gather and store data from millions of people, much of it potentially sensitive, subsequently using that data to inform AI models. Once gathered, this data can be leaked or hacked.
Additionally, it’s not always clear how AI technology processes data. Users might not realize that information they share solely for internal reference can also be extracted by nefarious actors and used to cause harm.
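One common mitigation is to scrub sensitive details before they ever reach a chatbot or its training logs. The sketch below shows the idea with a few illustrative regular expressions; real redaction needs far more exhaustive patterns and usually a dedicated PII-detection service.

```python
# Minimal sketch of redacting sensitive data before it reaches a
# chatbot or is stored for model training. Patterns are illustrative.
import re

REDACTIONS = [
    (re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"), "[EMAIL]"),
    (re.compile(r"\b(?:\d[ -]*?){13,16}\b"), "[CARD]"),
    (re.compile(r"\b\d{3}-\d{2}-\d{4}\b"), "[SSN]"),
]

def scrub(text: str) -> str:
    """Replace likely PII with placeholder tokens before storage."""
    for pattern, token in REDACTIONS:
        text = pattern.sub(token, text)
    return text

prompt = "My card 4111 1111 1111 1111 was charged; email jane@example.com"
print(scrub(prompt))  # send only the scrubbed text onward
```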
AI helps by improving pen testing
AI can turbocharge penetration testing by collating information and determining the best course of action, such as which host to attack first or which method to use. The results can be fed back into the AI model to generate new, more effective alternatives, and AI can also analyze pen test results to produce actionable insights for improving defences.
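As a simplified sketch of the “which host first” step, the snippet below ranks scanned hosts by the risk weight of their exposed services. The hosts, services, and weights are hypothetical; in a real engagement, the tester’s findings would feed back to refine the scoring.

```python
# Minimal sketch of target prioritization from scan output.
# Hosts, services, and risk weights are hypothetical examples.
SERVICE_RISK = {"telnet": 9, "smb": 8, "rdp": 7, "http": 4, "ssh": 3}

scan_results = {
    "10.0.0.5": ["http", "ssh"],
    "10.0.0.9": ["smb", "rdp"],
    "10.0.0.12": ["telnet", "http"],
}

def priority(host: str) -> int:
    """Sum the risk weight of every exposed service on the host."""
    return sum(SERVICE_RISK.get(svc, 1) for svc in scan_results[host])

# Highest-risk hosts first: these get tested before the rest.
for rank, host in enumerate(sorted(scan_results, key=priority,
                                   reverse=True), start=1):
    print(f"{rank}. {host} (risk score {priority(host)})")
```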
IT teams can use PentestGPT to identify security vulnerabilities that may otherwise go unnoticed. It uses AI and NLP to deliver an automated security report that directs professionals to possible security issues. However, these types of capabilities can be used for good or for ill, as cybercriminals can also use AI-assisted pen testing techniques to successfully carry out real data breaches.
Using AI to boost defences, sometimes against AI
In cybersecurity, as in every field, being forewarned about emerging challenges makes a big difference. Cybersecurity teams can and should plan to leverage AI for continuous monitoring, closing security gaps, and more, while also preparing strategies to overcome the obstacles that AI places in their path.