
U.S. Releases New AI Security Guidelines for Critical Infrastructure

The Department of Homeland Security (DHS), working with the Cybersecurity and Infrastructure Security Agency (CISA), has released safety and security guidelines to address AI risks to critical infrastructure systems in the US. The guidelines analyze system-level risks in three main categories.

Attacks Using AI: The use of AI to enhance, plan, or scale physical attacks on, or cyber compromises of, critical infrastructure.
Attacks Targeting AI Systems: Targeted attacks on AI systems supporting critical infrastructure.
Failures in AI Design and Implementation: Deficiencies or inadequacies in the planning, structure, implementation, or execution of an AI tool or system leading to malfunctions or other unintended consequences that affect critical infrastructure operations.
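
For owners and operators who want to track where AI touches their systems, the three categories above map naturally onto a simple tagging scheme. The sketch below is illustrative only and is not part of the DHS guidance; the enum values, the AIRiskFinding record, and the example finding are hypothetical.

```python
from dataclasses import dataclass
from enum import Enum


class AIRiskCategory(Enum):
    """The three system-level risk categories described in the DHS/CISA guidelines."""
    ATTACKS_USING_AI = "attacks using AI"            # AI used to enhance, plan, or scale attacks
    ATTACKS_TARGETING_AI = "attacks targeting AI"    # attacks on AI systems supporting infrastructure
    AI_DESIGN_FAILURES = "failures in AI design"     # flaws in planning, implementation, or execution


@dataclass
class AIRiskFinding:
    """One tracked risk item tied to a specific AI system and sector (hypothetical schema)."""
    system: str                # the AI tool or system in question
    sector: str                # critical infrastructure sector it supports
    category: AIRiskCategory   # which of the three DHS categories the risk falls under
    description: str


# Example: recording a data-poisoning concern against a forecasting model (made-up values).
finding = AIRiskFinding(
    system="load-forecasting model",
    sector="energy",
    category=AIRiskCategory.ATTACKS_TARGETING_AI,
    description="Training data could be poisoned via an unauthenticated external feed.",
)
print(finding.category.value)
```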

“CISA was pleased to lead the development of ‘Mitigating AI Risk: Safety and Security Guidelines for Critical Infrastructure Owners and Operators’ on behalf of DHS,” said CISA Director Jen Easterly. “Based on CISA’s expertise as National Coordinator for critical infrastructure security and resilience, DHS’ Guidelines are the agency’s first-of-its-kind cross-sector analysis of AI-specific risks to critical infrastructure sectors and will serve as a key tool to help owners and operators mitigate AI risk.”

To address these risks, DHS provides a four-part mitigation strategy based on NIST’s AI Risk Management Framework.

Govern: Establish an organizational culture of AI risk management – Prioritize and take ownership of safety and security outcomes, embrace radical transparency, and build organizational structures that make security a top business priority.

Map: Understand your individual AI use context and risk profile – Establish and understand the foundational context from which AI risks can be evaluated and mitigated.

Measure: Develop systems to assess, analyze, and track AI risks – Identify repeatable methods and metrics for measuring and monitoring AI risks and impacts.

Manage: Prioritize and act upon AI risks to safety and security – Implement and maintain identified risk management controls to maximize the benefits of AI systems while decreasing the likelihood of harmful safety and security impacts.
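
As a rough illustration of how the four functions could be operationalized, the sketch below arranges them as fields of a recurring review record, with gaps flagged for the next cycle. It is a minimal, hypothetical example and is not drawn from the DHS guidelines or the NIST AI Risk Management Framework itself; the class, field, and method names are assumptions.

```python
from dataclasses import dataclass, field


@dataclass
class AIRiskReview:
    """A hypothetical review record organized around the four NIST AI RMF functions."""
    system: str
    # Govern: ownership and accountability for safety and security outcomes
    risk_owner: str = "unassigned"
    # Map: the context in which the AI system is used
    use_context: str = ""
    # Measure: repeatable metrics tracked over time
    metrics: dict[str, float] = field(default_factory=dict)
    # Manage: controls applied to reduce the likelihood of harmful impacts
    controls: list[str] = field(default_factory=list)

    def outstanding_actions(self) -> list[str]:
        """Flag gaps in any of the four functions for the next review cycle."""
        gaps = []
        if self.risk_owner == "unassigned":
            gaps.append("Govern: assign a risk owner")
        if not self.use_context:
            gaps.append("Map: document the AI use context")
        if not self.metrics:
            gaps.append("Measure: define at least one tracked metric")
        if not self.controls:
            gaps.append("Manage: record the controls in place")
        return gaps


# Example: a new system with nothing documented yet surfaces all four gaps.
review = AIRiskReview(system="predictive-maintenance model")
print(review.outstanding_actions())
```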

Countering Chemical, Biological, Radiological, and Nuclear Threats:

In collaboration with its Countering Weapons of Mass Destruction Office (CWMD), the Department conducted an analysis of the risks associated with the misuse of AI in the development or production of CBRN threats. It also provided recommended steps for mitigating these potential threats to the homeland. This comprehensive report, resulting from extensive collaboration with various stakeholders, including the U.S. Government, academia, and industry, reinforces the long-term objectives of ensuring the safe, secure, and trustworthy development and use of artificial intelligence. It also serves as a guide for potential interagency policy and implementation efforts.

“The responsible use of AI holds great promise for advancing science, solving urgent and future challenges, and improving our national security, but AI also requires that we be prepared to rapidly mitigate the misuse of AI in the development of chemical and biological threats,” said Assistant Secretary for CWMD Mary Ellen Callahan. “This report highlights the emerging nature of AI technologies, their interplay with chemical and biological research and the associated risks, and provides longer-term objectives around how to ensure safe, secure, and trustworthy development and use of AI. I am incredibly proud of our team at CWMD for this vital work which builds upon the Biden-Harris Administration’s forward-leaning Executive Order.”
