Tuesday , April 1 2025
office

Microsoft released PyRIT, A Tool For Generative AI Systems

Microsoft has released a new open automation framework called PyRIT (Python Risk Identification Toolkit). It helps security professionals and machine learning engineers identify and reduce risks in generative models.

The need for automation in AI Red Teaming:

CVE-2025-1268
Patch urgently! Canon Fixes Critical Printer Driver Flaw

Canon has announced a critical security vulnerability, CVE-2025-1268, in printer drivers for its production printers, multifunction printers, and laser printers....
Read More
CVE-2025-1268  Patch urgently! Canon Fixes Critical Printer Driver Flaw

Within Minute, RamiGPT To Escalate Privilege Gaining Root Access

RamiGPT is an AI security tool that targets root accounts. Using PwnTools and OpwnAI, it quickly navigated privilege escalation scenarios...
Read More
Within Minute, RamiGPT To Escalate Privilege Gaining Root Access

Australian fintech database exposed in 27000 records

Cybersecurity researcher Jeremiah Fowler recently revealed a sensitive data exposure involving the Australian fintech company Vroom by YouX, previously known...
Read More
Australian fintech database exposed in 27000 records

Over 200 Million Info Leaked Online Allegedly Belonging to X

Safety Detectives' Cybersecurity Team found a forum post where a threat actor shared a .CSV file with over 200 million...
Read More
Over 200 Million Info Leaked Online Allegedly Belonging to X

FBI investigating cyberattack at Oracle, Bloomberg News reports

The Federal Bureau of Investigation (FBI) is probing the cyberattack at Oracle (ORCL.N), opens new tab that has led to...
Read More
FBI investigating cyberattack at Oracle, Bloomberg News reports

OpenAI Offering $100K Bounties for Critical Vulns

OpenAI has increased its maximum bug bounty payout to $100,000, up from $20,000, to encourage the discovery of critical vulnerabilities...
Read More
OpenAI Offering $100K Bounties for Critical Vulns

Splunk Alert User RCE and Data Leak Vulns

Splunk has released a security advisory about critical vulnerabilities in Splunk Enterprise and Splunk Cloud Platform. These issues could lead...
Read More
Splunk Alert User RCE and Data Leak Vulns

CIRT alert Situational Awareness for Eid Holidays

As the Eid holidays near, cybercriminals may try to take advantage of weakened security during this time. The CTI unit...
Read More
CIRT alert Situational Awareness for Eid Holidays

Cyberattack on Malaysian airports: PM rejected $10 million ransom

Operations at Kuala Lumpur International Airport (KLIA) were unaffected by a cyber attack in which hackers demanded US$10 million (S$13.4...
Read More
Cyberattack on Malaysian airports: PM rejected $10 million ransom

Micropatches released for Windows zero-day leaking NTLM hashes

Unofficial patches are available for a new Windows zero-day vulnerability that allows remote attackers to steal NTLM credentials by deceiving...
Read More
Micropatches released for Windows zero-day leaking NTLM hashes

Red teaming AI systems is complex. Microsoft’s AI Red Team consists of experts in security, adversarial machine learning, and responsible AI. They utilize resources from the Fairness center, AETHER, and the Office of Responsible AI. The goal is to identify and measure AI risks and develop mitigations to minimize them.

PyRIT for generative AI Red teaming:

PyRIT was tested by the Microsoft AI Red Team. It was initially a set of one-off scripts used for testing generative AI systems in 2022. As they tested different types of generative AI systems and looked for various risks, they added new features. Now, PyRIT is a reliable tool in the Microsoft AI Red Team’s toolkit.

Source: Microsoft

Microsoft found a major advantage in using PyRIT: efficiency. For example, during a red teaming exercise on a Copilot system, we were able to select a category, create thousands of malicious prompts, and use PyRIT’s scoring engine to evaluate the system’s output in just a few hours instead of weeks.

PyRIT is not meant to replace manual red teaming of generative AI systems, but rather to enhance an AI red teamer’s expertise and automate tedious tasks. It helps identify potential risks that can be further investigated by the security professional. The professional maintains control over the strategy and execution of the AI red team operation, while PyRIT automates the process of generating harmful prompts using the initial dataset provided by the professional.

PyRIT is not just a prompt generation tool. It adapts its approach based on the AI system’s response and generates the next input. This process continues until the security professional’s goal is reached. click here to read out the full report.

 

 

Check Also

Singapore

Singapore issues new guidelines for data center and cloud services

The Infocomm Media Development Authority (IMDA of Singapore unveils advisory guidelines to reduce occurrences of …

Leave a Reply

Your email address will not be published. Required fields are marked *