
U.S. Releases New AI Security Guidelines for Critical Infrastructure

The Department of Homeland Security (DHS), working with the Cybersecurity and Infrastructure Security Agency (CISA), has released safety and security guidelines to address AI risks to critical infrastructure systems in the US. The guidelines analyze system-level risks in three main categories.

Attacks Using AI: The use of AI to enhance, plan, or scale physical attacks on, or cyber compromises of, critical infrastructure.
Attacks Targeting AI Systems: Targeted attacks on AI systems supporting critical infrastructure.
Failures in AI Design and Implementation: Deficiencies or inadequacies in the planning, structure, implementation, or execution of an AI tool or system leading to malfunctions or other unintended consequences that affect critical infrastructure operations.

“CISA was pleased to lead the development of ‘Mitigating AI Risk: Safety and Security Guidelines for Critical Infrastructure Owners and Operators’ on behalf of DHS,” said CISA Director Jen Easterly. “Based on CISA’s expertise as National Coordinator for critical infrastructure security and resilience, DHS’ Guidelines are the agency’s first-of-its-kind cross-sector analysis of AI-specific risks to critical infrastructure sectors and will serve as a key tool to help owners and operators mitigate AI risk.”

To address these risks, DHS provides a four-part mitigation strategy based on NIST’s AI Risk Management Framework.

Govern: Establish an organizational culture of AI risk management – Prioritize and take ownership of safety and security outcomes, embrace radical transparency, and build organizational structures that make security a top business priority.

Map: Understand your individual AI use context and risk profile – Establish and understand the foundational context from which AI risks can be evaluated and mitigated.

Measure: Develop systems to assess, analyze, and track AI risks – Identify repeatable methods and metrics for measuring and monitoring AI risks and impacts.

Manage: Prioritize and act upon AI risks to safety and security – Implement and maintain identified risk management controls to maximize the benefits of AI systems while decreasing the likelihood of harmful safety and security impacts.
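As an illustration only, the sketch below models these four NIST AI RMF functions (Govern, Map, Measure, Manage) as a simple internal checklist an owner or operator might track. The class name, structure, and example action items are hypothetical and are not taken from the DHS guidelines or the NIST framework text.

from dataclasses import dataclass, field

# Hypothetical, minimal illustration of tracking the four NIST AI RMF
# functions (Govern, Map, Measure, Manage) as an internal checklist.
# The action items below are examples, not text from the DHS guidelines.

@dataclass
class RmfFunction:
    name: str
    goal: str
    actions: list[str] = field(default_factory=list)
    completed: set[str] = field(default_factory=set)

    def mark_done(self, action: str) -> None:
        # Record an action as complete if it belongs to this function.
        if action in self.actions:
            self.completed.add(action)

    def progress(self) -> float:
        # Fraction of tracked actions completed for this function.
        return len(self.completed) / len(self.actions) if self.actions else 0.0

ai_rmf = [
    RmfFunction("Govern", "Establish an organizational culture of AI risk management",
                ["Assign executive ownership of AI safety outcomes",
                 "Publish an AI transparency policy"]),
    RmfFunction("Map", "Understand your individual AI use context and risk profile",
                ["Inventory AI systems touching critical infrastructure",
                 "Document the operational context of each system"]),
    RmfFunction("Measure", "Develop systems to assess, analyze, and track AI risks",
                ["Define repeatable risk metrics",
                 "Schedule periodic risk re-assessments"]),
    RmfFunction("Manage", "Prioritize and act upon AI risks to safety and security",
                ["Implement controls for the highest-priority risks",
                 "Review control effectiveness after incidents"]),
]

if __name__ == "__main__":
    ai_rmf[0].mark_done("Assign executive ownership of AI safety outcomes")
    for fn in ai_rmf:
        print(f"{fn.name}: {fn.progress():.0%} of tracked actions complete")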

Countering Chemical, Biological, Radiological, and Nuclear Threats:

In collaboration with its Countering Weapons of Mass Destruction Office (CWMD), the Department conducted an analysis of the risks associated with the misuse of AI in the development or production of CBRN threats. It also provided recommended steps for mitigating these potential threats to the homeland. This comprehensive report, resulting from extensive collaboration with various stakeholders, including the U.S. Government, academia, and industry, reinforces the long-term objectives of ensuring the safe, secure, and trustworthy development and use of artificial intelligence. It also serves as a guide for potential interagency policy and implementation efforts.

“The responsible use of AI holds great promise for advancing science, solving urgent and future challenges, and improving our national security, but AI also requires that we be prepared to rapidly mitigate the misuse of AI in the development of chemical and biological threats,” said Assistant Secretary for CWMD Mary Ellen Callahan. “This report highlights the emerging nature of AI technologies, their interplay with chemical and biological research and the associated risks, and provides longer-term objectives around how to ensure safe, secure, and trustworthy development and use of AI. I am incredibly proud of our team at CWMD for this vital work which builds upon the Biden-Harris Administration’s forward-leaning Executive Order.”
