
AI for security. Now we need security for AI

Since the release of ChatGPT, artificial intelligence (AI), machine learning (ML) and large language models (LLMs) have become the number one topic of discussion for cybersecurity practitioners, vendors and investors alike. This is no surprise; as Marc Andreessen noted a decade ago, software is eating the world, and AI is starting to eat software.

Despite all the attention AI has received in the industry, the vast majority of the discussion has focused on how advances in AI will impact defensive and offensive security capabilities. What is not being discussed nearly as much is how we secure the AI workloads themselves.

Over the past several months, we have seen many cybersecurity vendors launch products powered by AI, such as Microsoft Security Copilot, infuse ChatGPT into existing offerings, or even reposition entirely, as ShiftLeft did when it became Qwiet AI. I anticipate that we will continue to see a flood of press releases from tens and even hundreds of security vendors launching new AI products. It is obvious that AI for security is here.

A brief look at attack vectors of AI systems

Securing AI and ML systems is difficult because they have two types of vulnerabilities: those that are common in other kinds of software applications and those unique to AI/ML.

First, let’s get the obvious out of the way: the code that powers AI and ML is as likely to have vulnerabilities as the code that runs any other software. For several decades, we have seen that attackers are perfectly capable of finding and exploiting gaps in code to achieve their goals. This brings up the broad topic of code security, which encompasses all the discussions about software security testing, shift left, supply chain security and the like.

Because AI and ML systems are designed to produce outputs after ingesting and analyzing large amounts of data, they face several unique security challenges not seen in other types of systems. MIT Sloan summarized these challenges by organizing the relevant vulnerabilities across five categories: data risks, software risks, communications risks, human factor risks and system risks.

Some of the risks worth highlighting include:

  • Data poisoning and manipulation attacks. Data poisoning happens when attackers tamper with the raw data used to train the AI/ML model. One of the most critical issues with data manipulation is that a poisoned model cannot be easily repaired once the erroneous inputs have been identified; the tainted data's influence is already baked into the trained weights (the first sketch after this list illustrates the idea).
  • Model disclosure attacks. These happen when an attacker provides carefully designed inputs and observes the resulting outputs to infer how the model behaves.
  • Stealing models after they have been trained. Doing this can enable attackers to obtain sensitive data that was used for training the model, use the model itself for financial gain, or influence its decisions (the second sketch after this list shows a basic extraction attack). For example, if a bad actor knows what factors are considered when something is flagged as malicious behavior, they can find a way to avoid these markers and circumvent a security tool that uses the model.
  • Model poisoning attacks. Tampering with the model's underlying algorithms or parameters makes it possible for attackers to influence the decisions the model makes.
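
To make the data poisoning risk concrete, here is a minimal sketch, assuming scikit-learn and an attacker who can flip the labels on 10% of the training rows; the synthetic dataset and the flip budget are illustrative assumptions, not details from the article. Even this crude attack measurably degrades the model that defenders later rely on.

```python
# A minimal sketch of a label-flipping poisoning attack.
# Assumptions (not from the article): scikit-learn, a synthetic
# binary-classification dataset, and a 10% attacker flip budget.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Clean data standing in for, e.g., benign/malicious telemetry.
X, y = make_classification(n_samples=2000, n_features=20, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# The attacker flips the labels of 10% of the training rows they can reach.
n_poison = int(0.10 * len(y_train))
idx = rng.choice(len(y_train), size=n_poison, replace=False)
y_poisoned = y_train.copy()
y_poisoned[idx] = 1 - y_poisoned[idx]

# Train one model on clean labels and one on poisoned labels.
clean_model = LogisticRegression(max_iter=1000).fit(X_train, y_train)
poisoned_model = LogisticRegression(max_iter=1000).fit(X_train, y_poisoned)

print("clean accuracy:   ", clean_model.score(X_test, y_test))
print("poisoned accuracy:", poisoned_model.score(X_test, y_test))
```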
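
And a second minimal sketch, this time of model stealing through query access alone, again assuming scikit-learn; treating the victim as a black box the attacker can only call predict() on, and the Gaussian query distribution, are illustrative assumptions. The attacker never sees the victim's weights or training data, yet ends up with a surrogate that largely mirrors its decisions.

```python
# A minimal sketch of model extraction via black-box queries.
# Assumption (not from the article): the attacker can call predict()
# on the victim model but sees nothing else about it.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.tree import DecisionTreeClassifier

# The defender's trained model; its internals stay hidden.
X, y = make_classification(n_samples=2000, n_features=10, random_state=1)
victim = RandomForestClassifier(random_state=1).fit(X, y)

# The attacker synthesizes inputs and harvests the victim's outputs...
rng = np.random.default_rng(1)
queries = rng.normal(size=(5000, 10))
stolen_labels = victim.predict(queries)

# ...then fits a surrogate that mimics the victim's decision boundary.
surrogate = DecisionTreeClassifier(random_state=1).fit(queries, stolen_labels)

# Measure how often the surrogate agrees with the victim on fresh inputs.
probe = rng.normal(size=(1000, 10))
agreement = (surrogate.predict(probe) == victim.predict(probe)).mean()
print(f"surrogate agrees with victim on {agreement:.0%} of probes")
```

Real attackers pick smarter query distributions than random noise, but even this naive probing recovers much of a model's behavior, which is exactly why a security tool that exposes model outputs can be reverse-engineered and circumvented.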

In a world where decisions are made and executed in real time, attacks on an algorithm can have catastrophic consequences. A case in point is the story of Knight Capital, which lost $460 million in 45 minutes due to a bug in the company’s high-frequency trading algorithm. The firm was pushed to the verge of bankruptcy and ended up being acquired by a rival shortly thereafter. Although in this specific case the issue was not related to any adversarial behavior, it is a great illustration of the potential impact an error in an algorithm may have.

AI security landscape

There is a fair amount written online about the problem of AI security, although it is significantly less than what has been written about using AI for cyber defense and offense. Many might argue that AI security can be tackled by getting people and tools from several disciplines, including data, software and cloud security, to work together, but there is a strong case to be made for a distinct specialization.

When it comes to the vendor landscape, I would categorize AI/ML security as an emerging field. The summary that follows provides a brief overview of vendors in this space. Note that:

  • The chart only includes vendors in AI/ML model security. It does not include other critical players in fields that contribute to the security of AI such as encryption, data or cloud security.
  • The chart plots companies across two axes: capital raised and LinkedIn followers. It is understood that LinkedIn followers are not the best metric to compare against, but no other readily available metric is ideal either.

Although there are most definitely more founders tackling this problem in stealth mode, it is also apparent that the AI/ML model security space is far from saturated. As these innovative technologies gain widespread adoption, we will inevitably see attacks and, with that, a growing number of entrepreneurs looking to tackle this hard-to-solve challenge.

Closing notes

In a world powered by AI, any unexpected behavior of an algorithm, or any compromise of the underlying data or the systems on which it runs, will have real-life consequences. The real-world impact of compromised AI systems can be catastrophic: misdiagnosed illnesses leading to medical decisions that cannot be undone, crashes of financial markets and car accidents, to name a few.

Although many of us have great imaginations, we cannot yet fully comprehend the whole range of ways in which we can be affected. As of today, it is hard to find any news about AI/ML hacks; that may be because there aren’t any, or, more likely, because they have not yet been detected. That will change soon.

Today, we are in a very different place. Although there is not enough security talent, there is a solid understanding that security is critical and a decent idea of what the fundamentals of security look like. That, combined with the fact that many of the brightest industry innovators are working to secure AI, gives us a chance to not repeat the mistakes of the past and build this new technology on a solid and secure foundation.

Will we use this chance? Only time will tell. For now, I am curious about what new types of security problems AI and ML will bring and what new types of solutions will emerge in the industry as a result.

Ross Haleliuk is a cybersecurity product leader, head of product at LimaCharlie and author of Venture in Security.
