Tuesday, June 24, 2025

OWASP AI Testing Guide Launched to Uncover Vulns in AI Systems

OWASP has released its AI Testing Guide, a framework to help organizations find and fix vulnerabilities specific to AI systems. This initiative meets the rising demand for specialized security, privacy, and ethical testing in AI, which is essential for sectors like healthcare, finance, automotive, and cybersecurity.

OWASP is known for its Web Security Testing Guide (WSTG) and Mobile Security Testing Guide (MSTG), but the AI Testing Guide addresses the specific risks of AI applications.

Unlike traditional software, AI systems can behave unpredictably, depend heavily on data quality, and are vulnerable to threats such as adversarial attacks, data leakage, and model poisoning.

The new guide draws on established OWASP methodologies but is technology- and industry-agnostic, making it relevant across diverse AI deployment scenarios.

AI testing involves more than checking functionality. AI models learn from large datasets and can change over time, so they face issues such as bias and drift that traditional software does not. The OWASP AI Testing Guide highlights the following areas (illustrative sketches for several of them follow the list):

Bias and Fairness Assessments: Checking data and model behavior for bias so that AI systems do not produce discriminatory outcomes.

Adversarial Robustness: Simulating attacks with crafted inputs designed to mislead or hijack models, a critical step given how susceptible AI models are to adversarial examples.

Security and Privacy Evaluations: Testing for vulnerabilities such as model extraction, data leakage, and poisoning attacks, and applying privacy-preserving methods like differential privacy to meet regulatory requirements.

Continuous Monitoring: Ongoing checks of data quality and model performance to detect drift or degradation as AI systems operate in changing environments.
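As a hedged illustration of the bias and fairness item above, the short Python sketch below computes a demographic parity difference, the gap in positive-prediction rates between two groups. The array contents and the 0/1 group encoding are assumptions for the example, not something the OWASP guide prescribes.

```python
import numpy as np

def demographic_parity_difference(y_pred, group):
    """Gap in positive-prediction rates between two groups.

    y_pred: 0/1 model decisions; group: 0/1 group membership for each record.
    A value near 0 suggests similar treatment; a large gap warrants review.
    """
    y_pred, group = np.asarray(y_pred), np.asarray(group)
    return abs(y_pred[group == 0].mean() - y_pred[group == 1].mean())

# Illustrative call with made-up decisions and group labels (prints ~0.667).
print(demographic_parity_difference([1, 1, 1, 0, 0, 1], [0, 0, 0, 1, 1, 1]))
```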
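For the adversarial robustness item, a common entry-level probe is the Fast Gradient Sign Method (FGSM). The minimal sketch below assumes a PyTorch classifier named `model`, a loss function, and a labeled test batch `(x, y)` with inputs scaled to [0, 1]; it is an illustration under those assumptions, not code from the OWASP guide.

```python
import torch

def fgsm_accuracy(model, x, y, loss_fn, epsilon=0.03):
    """Rough adversarial-robustness probe: accuracy on FGSM-perturbed inputs.

    Assumes `model` is a torch.nn.Module classifier, (x, y) is a labeled test
    batch with inputs scaled to [0, 1], and epsilon is the perturbation budget.
    """
    model.eval()
    x_adv = x.clone().detach().requires_grad_(True)
    loss_fn(model(x_adv), y).backward()
    # One signed-gradient step (FGSM), clipped back to the valid input range.
    x_adv = (x_adv + epsilon * x_adv.grad.sign()).clamp(0, 1).detach()

    with torch.no_grad():
        clean_acc = (model(x).argmax(dim=1) == y).float().mean().item()
        adv_acc = (model(x_adv).argmax(dim=1) == y).float().mean().item()
    return clean_acc, adv_acc
```

A sharp drop from clean to adversarial accuracy is the usual trigger for adversarial training or input hardening.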
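The security and privacy item mentions differential privacy; one textbook mechanism is adding Laplace noise calibrated to a query's sensitivity. The snippet below applies it to a simple count query. The dataset, predicate, and epsilon value are illustrative assumptions.

```python
import numpy as np

def dp_count(values, predicate, epsilon=1.0):
    """Differentially private count via the Laplace mechanism.

    A count query has L1 sensitivity 1 (one record changes the result by at
    most 1), so Laplace noise with scale 1/epsilon gives epsilon-differential
    privacy for a single release of this statistic.
    """
    true_count = sum(1 for v in values if predicate(v))
    return true_count + np.random.laplace(loc=0.0, scale=1.0 / epsilon)

# Illustrative query: privately count customers aged 40 or over (made-up data).
ages = [23, 35, 41, 29, 52, 61, 47]
print(dp_count(ages, lambda a: a >= 40, epsilon=0.5))
```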

The guide is written for a wide audience, including developers, architects, data analysts, researchers, and risk officers, and offers practical steps for every stage of the AI product lifecycle.

It provides a strong set of tests, including data validation, fairness checks, adversarial robustness, and ongoing monitoring, helping organizations document risk validation and control.
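One lightweight way to implement the ongoing monitoring mentioned above is to compare a live feature's distribution against its training-time baseline with a two-sample Kolmogorov-Smirnov test. The sketch below uses SciPy for that; the window sizes, simulated values, and p-value threshold are assumptions for the example.

```python
import numpy as np
from scipy.stats import ks_2samp

def feature_drifted(baseline, live, p_threshold=0.01):
    """Flag distribution drift for one numeric feature.

    Runs a two-sample Kolmogorov-Smirnov test between training-time values
    and a window of recent production values; a small p-value means the two
    samples are unlikely to share the same distribution.
    """
    _statistic, p_value = ks_2samp(baseline, live)
    return p_value < p_threshold

# Simulated check: a shifted production window should raise the drift flag.
rng = np.random.default_rng(0)
train_values = rng.normal(0.0, 1.0, size=5_000)
prod_values = rng.normal(0.4, 1.0, size=1_000)  # mean shift = drift
print(feature_drifted(train_values, prod_values))  # expected: True
```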

OWASP’s approach is collaborative: an expert-written draft is refined through community feedback.

The project roadmap includes a series of interactive workshops and a systematic update cycle, so the guide stays relevant as AI technologies and threats continue to evolve.

The goal is to promote widespread adoption of rigorous AI testing practices that build trust in AI solutions and protect against emerging risks.

With the AI Testing Guide, OWASP aims to set a new standard for AI security, helping organizations deploy AI systems while addressing vulnerabilities, bias, and performance issues.
