AI has spurred a technological renaissance in recent years, and its revolutionary power has transformed industry after industry. Its impact is everywhere, from self-driving cars navigating city streets to AI-generated art and music that rivals human creativity.

As its influence grows, AI is also reshaping transportation, artistic expression, and cybersecurity. Although this shift promises greater productivity and effectiveness, it also presents security teams with problems that can keep them up at night. In this post, we’ll examine AI’s cybersecurity drawbacks and, more importantly, how to overcome them.

Introduction

Let’s start with the basics and define the terms to level-set:

Definition of cybersecurity

Cybersecurity protects networks, devices, and data from unauthorised access and unlawful use and ensures information confidentiality, integrity, and availability.

Artificial intelligence definition

Artificial intelligence (AI) is the ability of machines to perform human-like tasks such as learning and problem-solving. This is a deliberately broad term, not least because humans do a great many things. It’s vital to remember that AI encompasses a variety of techniques, subfields, and applications, including machine learning, natural language processing, neural networks, and more.

Brief history of AI in cybersecurity

In recent years, the cybersecurity industry has dramatically expanded its use of AI technologies and algorithms. The broad definition of AI makes it hard to pinpoint exactly when this trend began, but product leaders have steadily found new and innovative ways for machines to perform security tasks previously handled by humans, such as improving cybersecurity posture, triaging alerts, investigating cyber attacks, and more.

Key cybersecurity AI milestones:

  • 1980s: rule-based detection systems, such as early firewalls and intrusion detection.
  • 2000s: the rise of Big Data enabled massive data storage and analysis.
  • Late 2000s: supervised learning algorithms applied to threat detection and prevention.
  • 2010s:
    • Unsupervised learning algorithms surfaced anomalies and previously unseen risks.
    • Deep learning processed large volumes of data and uncovered complex patterns.
    • Behavioural modelling and analytics appeared in endpoint detection, network analytics, SIEMs, and more.
    • Natural language processing (NLP) improved text analysis and social engineering detection.

In the 2020s, the buzz around ChatGPT and Google Bard has led many security providers to add generative AI capabilities, such as ChatGPT integrations, large language models (LLMs), and other advanced natural language processing features, to their products.

Potential AI Cybersecurity Drawbacks

The rise of LLMs in cybersecurity solutions comes with significant drawbacks.

Expertise is still required.

LLM-based solutions let users ask natural language questions of a graph database to gain insights into an organisation, its environment, its risks, indicators of compromise (IoCs), and more. Instead of crafting a query, an analyst might ask the AI to “tell me everything you know about this IP address” or “is this IoC present in my environment?”. Under the hood, this approach translates the analyst’s natural language question into a specialised query that answers it.
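As a rough illustration of how such a translation layer might work, here is a minimal sketch. The prompt template, graph schema, and the llm_complete and run_query callables are hypothetical stand-ins, not any specific vendor’s implementation.

```python
# Minimal sketch of a natural-language-to-query translation layer.
# The prompt, schema, and injected callables are illustrative assumptions.

PROMPT_TEMPLATE = """You translate security analyst questions into graph queries.
Graph schema: (Host)-[:CONNECTED_TO]->(IPAddress), (Alert)-[:INVOLVES]->(IPAddress)
Question: {question}
Return only the query, nothing else."""


def translate_question(question: str, llm_complete) -> str:
    """Ask an LLM to turn a natural language question into a graph query."""
    return llm_complete(PROMPT_TEMPLATE.format(question=question))


def answer(question: str, llm_complete, run_query) -> list:
    """Translate the question, run the generated query, and return the results."""
    query = translate_question(question, llm_complete)
    return run_query(query)  # executes against the security graph database


# Example usage with stubbed dependencies:
# answer("Tell me everything you know about 203.0.113.7", llm, graph_db.run)
```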

LLMs may simplify SecOps processes such as triage, investigation, and threat hunting, but analysts still need to know what questions to ask and in what order, and they need enough experience to interpret the results and translate them into containment and remediation steps. LLMs improve performance at the margins, but they do not lower the entry barrier for analysts.

Privacy Concerns

Instead of developing their own LLMs or AI systems, several security vendors integrate ChatGPT into their solutions. Security leaders should be wary of products with these connectors, because ChatGPT (and related technologies) never forgets: data submitted through the security tool, which may include proprietary data, threat assessments, IoCs, and more, can be retained indefinitely and used to train future models. This could give future ChatGPT users access to critical corporate data.

Mistaken Results

Every AI is different, but the bias and hallucinations of LLM-based solutions can severely limit their applicability to cybersecurity.

LLM bias
Bias, a form of unintentional data poisoning, is a major issue in LLMs. LLM bias occurs when an AI system shows preference or discrimination, usually reflecting its training data. It is important to understand that these biases are unintended reflections of the training data, not deliberate opinions held by the AI. LLM hallucinations can compound these biases, since the model may draw on biased patterns or stereotypes in its training data to produce contextually plausible replies.

LLM hallucinations
LLM hallucinations also undermine accuracy. In the context of LLMs, “hallucination” describes the model producing erroneous, nonsensical, or fabricated text. Incomplete or contradictory data sets, as well as guesswork prompted by ambiguous cues, can cause this. Whatever the cause, irrational, fictitious, or nonsensical responses are a serious problem in a cybersecurity context.

No Transparency

Many recent AI solutions are “black boxes” that don’t explain how they work or how they reach their conclusions. End users receive different levels of detail from each vendor and solution, but a lack of transparency raises doubts about accuracy and reliability. This matters especially in cybersecurity, because the output of security tools directly affects an organisation’s security.

AI Risk Reduction Steps

This post is not meant to deter security professionals from using AI in cybersecurity products, but rather to identify potential concerns and suggest solutions. This section discusses how security products can avoid the issues described above and how security teams can choose cybersecurity AI solutions accordingly.

Detailed activity records increase transparency.

Security leaders should seek AI solutions that reveal how conclusions were reached, what actions were taken, what source data was used, and where threat intelligence was employed. This transparency builds trust in, and understanding of, the system’s results.

If an AI were to triage security alerts from various security tools, it would be important to know what checks were performed, in what order, what the results of each check were, and what information led the system to conclude that something is malicious. If something is benign, an audit trail of all inspections can support that decision and even prove to an auditor that an alarm was a false positive.

The audit trail of an automated alert triage engine reveals what was done and how conclusions were drawn.
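As a rough sketch of what such an audit trail might look like in code, consider the following. The check names, verdict values, and data structures are assumptions made for illustration, not any particular product’s design.

```python
# Minimal sketch of a triage engine that records an audit trail for every check.
# Check names, verdicts, and structure are illustrative assumptions.
from dataclasses import dataclass, field
from datetime import datetime, timezone


@dataclass
class AuditEntry:
    check: str       # e.g. "threat intelligence lookup", "known-IoC match"
    result: str      # what the check found
    verdict: str     # "benign", "suspicious", or "malicious"
    timestamp: str = field(
        default_factory=lambda: datetime.now(timezone.utc).isoformat())


@dataclass
class TriageResult:
    alert_id: str
    final_verdict: str
    audit_trail: list


def triage(alert_id: str, checks) -> TriageResult:
    """Run each check in order, recording what was done and what it concluded."""
    trail = []
    for name, check_fn in checks:
        result, verdict = check_fn()
        trail.append(AuditEntry(check=name, result=result, verdict=verdict))
    # Final verdict: malicious if any check concluded malicious, otherwise benign.
    final = "malicious" if any(e.verdict == "malicious" for e in trail) else "benign"
    return TriageResult(alert_id=alert_id, final_verdict=final, audit_trail=trail)
```

Because every entry is preserved, the same trail can later be handed to an auditor to justify a false positive determination.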

Use a proprietary AI instead of ChatGPT.

Choose a solution with a proprietary AI system built by a security-focused firm. Because they are designed to solve the specific, painful problems security teams face, proprietary AI tools are more likely to streamline and automate security tasks. These products are also less likely to rely on consumer products such as ChatGPT, Microsoft Copilot, and Google Bard, which protects your security incident data, intellectual property, IoCs, and more from being permanently absorbed into the ChatGPT training data set and exposed to other users.

Effective AI doesn’t need to be generative.

While generative AI and LLMs are popular, other forms of AI can handle SecOps tasks well, and many of them are less prone to LLM issues such as data bias and hallucinations. For example, a cybersecurity AI system may be built to triage security alerts by replicating the question-and-answer process security analysts follow when evaluating alerts and conducting incident investigations. Because this application focuses on understanding the contents of security alerts rather than interpreting human instructions, it does not need an LLM’s NLP capabilities.

Even without being generative, AI can effectively scrutinise security alerts by replicating the question-and-answer process a human analyst would follow: determining whether an alert is malicious, performing an in-depth investigation, and drawing answers from data sources such as MITRE ATT&CK, CIS, threat intelligence feeds, learned environment behaviour, and activity benchmarks for the environment that raised the alert.
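A minimal sketch of this non-generative question-and-answer approach might look like the following, where each question an analyst would ask becomes a deterministic check against a data source. The data sources, function names, and scoring rule are assumptions for illustration only.

```python
# Minimal sketch of non-generative question-and-answer triage: each "question"
# an analyst would ask becomes a deterministic check against a data source.
# Data shapes, source names, and the scoring rule are illustrative assumptions.


def is_known_bad(ioc: str, threat_intel: set) -> bool:
    """Q: Is this indicator present in our threat intelligence feeds?"""
    return ioc in threat_intel


def maps_to_attack_technique(alert: dict, technique_index: dict) -> list:
    """Q: Does the alerted behaviour map to known MITRE ATT&CK techniques?"""
    return technique_index.get(alert.get("behaviour", ""), [])


def deviates_from_baseline(alert: dict, baseline: dict) -> bool:
    """Q: Is this activity unusual for this host compared to learned behaviour?"""
    usual_processes = baseline.get(alert["host"], set())
    return alert["process"] not in usual_processes


def triage_alert(alert: dict, threat_intel: set, technique_index: dict,
                 baseline: dict) -> str:
    """Answer the analyst's questions in order and combine them into a verdict."""
    signals = [
        is_known_bad(alert.get("ioc", ""), threat_intel),
        bool(maps_to_attack_technique(alert, technique_index)),
        deviates_from_baseline(alert, baseline),
    ]
    # Simple combination rule: two or more positive answers -> malicious.
    return "malicious" if sum(signals) >= 2 else "benign"
```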

Blending techniques improves precision.

The best security operations results come from combining LLMs, machine learning models, heuristics, and other AI approaches, and the order and mix of these methods affects accuracy. A late-2010s UEBA tool’s behavioural model would ask, “What abnormal credential use exists within my organisation?” While informative, this question yields many false positives, because benign user behaviour is often anomalous: travelling, using VPNs to watch Netflix, logging in from new locations, getting new devices, transferring departments, and so on can all affect an employee’s risk score.

However, asking “what abnormal credential use exists within my organisation?” of a smaller, more targeted cross section of the organisation would likely yield better results. A system has a much better chance of finding a true positive if it asks this question only after determining that a set of users received, opened, and clicked on a phishing email and entered their credentials into the phishing site. AI and machine learning algorithms can build on one another to improve accuracy, and for SecOps triage and investigation, chaining security data sources together leads to better conclusions.
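Here is a minimal sketch of that blending idea, where a phishing-detection step narrows the user population before a behavioural anomaly model is applied. The function names, event fields, and threshold are hypothetical.

```python
# Minimal sketch of blending techniques: a phishing-detection step narrows the
# user population before an anomaly model runs, reducing false positives.
# Function names, event fields, and the threshold are illustrative assumptions.


def users_who_fell_for_phish(email_events: list) -> set:
    """Step 1 (heuristics/ML): users who clicked the link and submitted credentials."""
    return {e["user"] for e in email_events
            if e.get("clicked_link") and e.get("submitted_credentials")}


def abnormal_credential_use(auth_events: list, anomaly_score,
                            threshold: float = 0.8) -> set:
    """Step 2 (behavioural model): users whose recent logins look anomalous."""
    return {e["user"] for e in auth_events if anomaly_score(e) >= threshold}


def compromised_candidates(email_events: list, auth_events: list,
                           anomaly_score) -> set:
    """Blend both signals: only flag anomalous logins for users who were phished."""
    phished = users_who_fell_for_phish(email_events)
    anomalous = abnormal_credential_use(auth_events, anomaly_score)
    return phished & anomalous
```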

Target painful, repetitive tasks to boost analyst productivity.

Effective AI solutions frequently target specialised and routine tasks. In security operations, AI can provide analysts with decision-ready results for specific tasks at each step of the SecOps incident lifecycle. The question-and-answer replication outlined earlier in this piece can be used to determine whether an alert is malicious, investigate the incident, and establish its scope and root cause. Programmatically translating this information into a customised response plan then helps SOC analysts address each security issue using best practices.

Security analysts receive incidents with a full overview, root cause information, and a detailed response plan, allowing them to quickly understand and act on them. This approach removes the need to ask an LLM to generate queries and improves SOC productivity.
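As a rough sketch of how triage findings might be translated programmatically into a response plan, consider the following. The playbook contents, category names, and incident fields are assumptions for illustration, not a prescribed methodology.

```python
# Minimal sketch of turning triage findings into a customised response plan.
# Playbook steps, category names, and incident fields are illustrative assumptions.

PLAYBOOKS = {
    "credential_theft": [
        "Reset credentials for affected users",
        "Revoke active sessions and tokens",
        "Review mailbox rules and MFA enrolment",
    ],
    "malware_execution": [
        "Isolate the affected host",
        "Collect the binary and submit it for analysis",
        "Hunt for the same file hash across the environment",
    ],
}


def build_response_plan(incident: dict) -> list:
    """Combine the incident's scope and root cause with a best-practice playbook."""
    steps = [
        f"Scope: {len(incident['affected_assets'])} assets affected",
        f"Root cause: {incident['root_cause']}",
    ]
    steps += PLAYBOOKS.get(incident["category"], ["Escalate for manual review"])
    return steps


# Example usage:
# build_response_plan({
#     "category": "credential_theft",
#     "root_cause": "phishing email with a credential-harvesting page",
#     "affected_assets": ["workstation-12", "user-jdoe"],
# })
```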

Conclusion

Modern AI cybersecurity solutions genuinely help cybersecurity professionals. They boost productivity and can help SOCs address their shortage of security experts. These systems can detect cybersecurity risks, cross-reference security information against threat intelligence in real time, and help human teams detect, investigate, and respond to cyber attacks, but they have drawbacks. To get the most out of these tools, security professionals should avoid the pitfalls below and apply the corresponding risk reduction steps.

Possible Drawbacks

  • High entry barriers: tools that only boost productivity for people who are already security experts.
  • Privacy issues: tools such as ChatGPT that could expose your company’s data or threat information in future versions of the model.
  • Inaccurate findings: tools, especially LLM-based ones, that are vulnerable to data bias and hallucinations and could mislead your security team.
  • Black boxes: tools that reveal little to no information about their actions or conclusions.

Risk Reduction Steps

  • Choose a vendor that builds its own AI over one that simply interfaces with ChatGPT.
  • Remember that generative AI and NLP aren’t the only ways AI can streamline your SOC; look for tools that remove your SOC’s main pain points and time sinks to boost productivity.
  • Find solutions that combine multiple AI approaches to improve accuracy and results.
  • Choose tools that disclose their actions, results, and reasoning.

How ManageX helps

Learn how ManageX Security’s AI-powered SOC co-pilot helps improve analyst productivity, detect more real attacks, and reduce SOC response times at https://managex.ae