
U.S. Releases New AI Security Guidelines for Critical Infrastructure

The Department of Homeland Security (DHS), together with the Cybersecurity and Infrastructure Security Agency (CISA), has released safety and security guidelines to address AI risks affecting critical infrastructure systems in the US. The guidelines analyze system-level risks in three main categories:

Attacks Using AI: The use of AI to enhance, plan, or scale physical attacks on, or cyber compromises of, critical infrastructure.
Attacks Targeting AI Systems: Targeted attacks on AI systems supporting critical infrastructure.
Failures in AI Design and Implementation: Deficiencies or inadequacies in the planning, structure, implementation, or execution of an AI tool or system leading to malfunctions or other unintended consequences that affect critical infrastructure operations.
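
For owners and operators who keep an internal AI risk register, the three categories above can double as a simple classification axis. The sketch below is a minimal, hypothetical illustration rather than anything prescribed by the DHS guidelines; the class names and example entries are assumptions chosen for clarity.

```python
from dataclasses import dataclass
from enum import Enum


class AIRiskCategory(Enum):
    """System-level risk categories described in the DHS/CISA guidelines."""
    ATTACKS_USING_AI = "Attacks using AI against critical infrastructure"
    ATTACKS_TARGETING_AI = "Attacks targeting AI systems supporting critical infrastructure"
    DESIGN_IMPLEMENTATION_FAILURE = "Failures in AI design and implementation"


@dataclass
class AIRiskRecord:
    """One entry in an organization's AI risk register (illustrative only)."""
    title: str
    category: AIRiskCategory
    affected_system: str
    notes: str = ""


# Hypothetical example entries showing how the three categories might be applied.
register = [
    AIRiskRecord(
        title="AI-generated spear phishing against SCADA operators",
        category=AIRiskCategory.ATTACKS_USING_AI,
        affected_system="OT operator workstations",
    ),
    AIRiskRecord(
        title="Poisoned training data in anomaly-detection model",
        category=AIRiskCategory.ATTACKS_TARGETING_AI,
        affected_system="Network anomaly detection",
    ),
    AIRiskRecord(
        title="Unvalidated model update causes false all-clear signals",
        category=AIRiskCategory.DESIGN_IMPLEMENTATION_FAILURE,
        affected_system="Sensor fusion pipeline",
    ),
]

for record in register:
    print(f"[{record.category.name}] {record.title} ({record.affected_system})")
```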


“CISA was pleased to lead the development of ‘Mitigating AI Risk: Safety and Security Guidelines for Critical Infrastructure Owners and Operators’ on behalf of DHS,” said CISA Director Jen Easterly. “Based on CISA’s expertise as National Coordinator for critical infrastructure security and resilience, DHS’ Guidelines are the agency’s first-of-its-kind cross-sector analysis of AI-specific risks to critical infrastructure sectors and will serve as a key tool to help owners and operators mitigate AI risk.”

To address these risks, DHS provides a four-part mitigation strategy based on NIST’s AI Risk Management Framework.

Govern: Establish an organizational culture of AI risk management – Prioritize and take ownership of safety and security outcomes, embrace radical transparency, and build organizational structures that make security a top business priority.

Map: Understand your individual AI use context and risk profile – Establish and understand the foundational context from which AI risks can be evaluated and mitigated.

Measure: Develop systems to assess, analyze, and track AI risks – Identify repeatable methods and metrics for measuring and monitoring AI risks and impacts.

Manage: Prioritize and act upon AI risks to safety and security – Implement and maintain identified risk management controls to maximize the benefits of AI systems while decreasing the likelihood of harmful safety and security impacts.
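
These four functions come from the NIST AI Risk Management Framework. As a rough illustration of how an owner or operator might track coverage of each function for a specific AI system, here is a minimal Python sketch; the status structure, method names, and the example system are assumptions, not part of the DHS guidance or the NIST framework.

```python
from dataclasses import dataclass, field
from enum import Enum


class RMFFunction(Enum):
    """The four NIST AI RMF functions referenced in the DHS mitigation strategy."""
    GOVERN = "Establish an organizational culture of AI risk management"
    MAP = "Understand the individual AI use context and risk profile"
    MEASURE = "Develop systems to assess, analyze, and track AI risks"
    MANAGE = "Prioritize and act upon AI risks to safety and security"


@dataclass
class MitigationStatus:
    """Tracks which RMF functions have been addressed for one AI system (illustrative)."""
    system_name: str
    completed: dict = field(default_factory=dict)  # RMFFunction -> short evidence note

    def mark_done(self, function: RMFFunction, evidence: str) -> None:
        self.completed[function] = evidence

    def outstanding(self) -> list:
        return [f for f in RMFFunction if f not in self.completed]


# Hypothetical usage for a fictitious predictive-maintenance model.
status = MitigationStatus(system_name="Predictive maintenance model (substation)")
status.mark_done(RMFFunction.GOVERN, "AI risk ownership assigned to CISO")
status.mark_done(RMFFunction.MAP, "Use context and dependencies documented")

print("Outstanding functions:", [f.name for f in status.outstanding()])
```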

Countering Chemical, Biological, Radiological, and Nuclear Threats:

In collaboration with its Countering Weapons of Mass Destruction Office (CWMD), the Department conducted an analysis of the risks associated with the misuse of AI in the development or production of CBRN threats. It also provided recommended steps for mitigating these potential threats to the homeland. This comprehensive report, resulting from extensive collaboration with various stakeholders, including the U.S. Government, academia, and industry, reinforces the long-term objectives of ensuring the safe, secure, and trustworthy development and use of artificial intelligence. It also serves as a guide for potential interagency policy and implementation efforts.

“The responsible use of AI holds great promise for advancing science, solving urgent and future challenges, and improving our national security, but AI also requires that we be prepared to rapidly mitigate the misuse of AI in the development of chemical and biological threats,” said Assistant Secretary for CWMD Mary Ellen Callahan. “This report highlights the emerging nature of AI technologies, their interplay with chemical and biological research and the associated risks, and provides longer-term objectives around how to ensure safe, secure, and trustworthy development and use of AI. I am incredibly proud of our team at CWMD for this vital work which builds upon the Biden-Harris Administration’s forward-leaning Executive Order.”
