The U.S. National Institute of Standards and Technology (NIST) has launched a comprehensive Cybersecurity, Privacy, and AI program, marking a pivotal moment in how organizations must prepare for AI-infused cyber and privacy risks. NIST’s key concern is how advances in AI and its broad adoption may affect current cybersecurity and privacy risks and risk management approaches.
This initiative aims to harmonize AI risk management with established cybersecurity and privacy standards, providing industry-tailored frameworks that help organizations navigate the complex intersection of AI innovation and security imperatives.
For data practitioners, this development translates into an urgent need to reassess their approach to data handling, model governance, and cross-functional collaboration.
Unpacking NIST’s New Cybersecurity, Privacy, and AI Program
The Cybersecurity, Privacy, and AI program provides guidance to industry, government agencies, and other organizations as they adapt their cybersecurity and privacy risk management approaches to realize the full potential of AI while responding to the new and modified cybersecurity and privacy risks it introduces.
What makes this initiative particularly significant is its implementation as a community profile within NIST’s Cybersecurity Framework (CSF) 2.0. NIST is developing the Cyber AI Profile on top of that landmark framework, with a planned release within the next six months, according to Kat Megas, the agency’s cybersecurity, privacy, and AI program manager.
Community profiles are an adaptation mechanism that allows a framework to be tailored to a specific shared-interest group or technology type. They give such a community a way to describe a consensus view of cybersecurity risk management, along with a shared taxonomy for risks and priorities, which helps align requirements from multiple sources and encourages common target outcomes.
The program addresses three fundamental sources of AI-related risks. First, it tackles cybersecurity and privacy risks arising from organizations’ use of AI, including securing AI systems and machine learning infrastructures while minimizing data leakage. Second, it focuses on developing defenses against AI-enabled attacks. Finally, it assists organizations in using AI for cyber defense activities and improving privacy protections.
AI’s Impact on Cybersecurity and Privacy
AI systems introduce unprecedented attack surfaces that traditional cybersecurity approaches struggle to address effectively. Machine learning models, inference engines, and AI-powered applications create unique vulnerabilities, including attacks on model weights, training data, and the APIs serving AI functions. These weaknesses expose organizations to sophisticated threats such as data poisoning, model inversion, and membership inference attacks, which allow adversaries to manipulate AI outputs, steal sensitive training data, or reverse-engineer proprietary model logic.
The complexity of AI supply chains compounds these vulnerabilities significantly. Modern AI systems often incorporate numerous third-party libraries, pretrained models, and cloud services, each potentially harboring hidden vulnerabilities. Without adequate security measures, AI-enabled systems become high-value targets that can inadvertently amplify cyber risk across entire organizations.
In May 2025, the National Security Agency’s Artificial Intelligence Security Center (AISC) released a joint Cybersecurity Information Sheet (CSI), AI Data Security: Best Practices for Securing Data Used to Train & Operate AI Systems. The guidance highlights the critical role of data security in ensuring the accuracy, integrity, and trustworthiness of AI outcomes and outlines key risks that can arise from data security and integrity issues across all phases of the AI lifecycle.
The new guidance focuses on three main areas of AI data security: risks in the data supply chain, potentially poisoned data, and data drift. Data supply chain risks represent a particularly insidious threat vector: external datasets remain vulnerable to manipulation by untrusted third parties, and compromised data can propagate through AI training pipelines, affecting future outputs and system behavior.
Maliciously modified or “poisoned” data presents another significant challenge. Threat actors may intentionally inject adversarial or false information into training sets to manipulate model behavior. This data poisoning can range from overt disinformation to subtle statistical biases, leading to inaccurate outcomes or compromised security postures.
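As a concrete illustration (a heuristic sketch, not a defense prescribed by the guidance), one simple screen for label-flipping poisoning flags training points whose labels disagree with most of their nearest neighbors. The function name, thresholds, and use of scikit-learn here are assumptions:

```python
import numpy as np
from sklearn.neighbors import NearestNeighbors

def flag_suspect_labels(X: np.ndarray, y: np.ndarray,
                        k: int = 5, min_agreement: float = 0.4) -> np.ndarray:
    """Return indices of training points whose label disagrees with
    most of their k nearest neighbors (a common label-flip signature)."""
    nn = NearestNeighbors(n_neighbors=k + 1).fit(X)
    _, idx = nn.kneighbors(X)           # column 0 is the point itself
    neighbor_labels = y[idx[:, 1:]]     # labels of the k true neighbors
    agreement = (neighbor_labels == y[:, None]).mean(axis=1)
    return np.where(agreement < min_agreement)[0]
```

A screen like this catches only crude manipulation; the subtler statistical biases described above require broader dataset auditing.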
Data drift refers to shifts in the statistical properties of incoming data relative to the original training datasets. While drift often occurs naturally as environments change, it can degrade system accuracy over time or be exploited by malicious actors to bypass AI-driven safeguards.
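A minimal drift check, sketched below under the assumption that each feature can be compared against a retained training baseline, uses a two-sample Kolmogorov-Smirnov test; the significance threshold and synthetic data are illustrative:

```python
import numpy as np
from scipy.stats import ks_2samp

def feature_drifted(baseline: np.ndarray, incoming: np.ndarray,
                    alpha: float = 0.01) -> bool:
    """Flag drift when the incoming sample is statistically
    distinguishable from the training baseline."""
    _, p_value = ks_2samp(baseline, incoming)
    return p_value < alpha  # small p-value: distributions likely differ

rng = np.random.default_rng(42)
train_feature = rng.normal(0.0, 1.0, size=10_000)    # distribution at training time
live_feature = rng.normal(0.4, 1.0, size=2_000)      # shifted production traffic
print(feature_drifted(train_feature, live_feature))  # True: investigate before retraining
```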
Beyond cybersecurity concerns, AI creates novel privacy challenges through its analytical power across disparate datasets and potential for data leakage during model training. However, AI also presents opportunities for enhanced privacy protection and improved cyber defense capabilities.
Prioritizing Action: Implications for Data Teams
Robust Data Handling and Provenance
Organizations should ensure data used in AI training comes from trusted, reliable sources and use provenance tracking so that data can be reliably traced throughout its lifecycle. Data teams face unprecedented responsibilities in ensuring the integrity and security of AI training datasets. Validation and sanitization processes must become continuous rather than periodic, with teams monitoring training data sources for trustworthiness indicators.
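What continuous validation can look like in practice is sketched below; the schema, column names, and specific checks are hypothetical examples rather than requirements from the NIST or NSA guidance:

```python
import pandas as pd

# Hypothetical schema for an incoming training batch; adapt to your pipeline.
EXPECTED_COLUMNS = {"record_id", "timestamp", "feature_a", "label"}
VALID_LABELS = {0, 1}

def validate_batch(batch: pd.DataFrame) -> list[str]:
    """Run lightweight integrity checks on every batch, not just at onboarding."""
    problems = []
    missing = EXPECTED_COLUMNS - set(batch.columns)
    if missing:
        problems.append(f"missing columns: {sorted(missing)}")
    if batch.duplicated().any():
        problems.append("duplicate rows detected")
    if "label" in batch.columns and not set(batch["label"].unique()) <= VALID_LABELS:
        problems.append("unexpected label values")
    if "timestamp" in batch.columns and batch["timestamp"].isna().any():
        problems.append("null timestamps")
    return problems  # an empty list means the batch passed all checks
```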
Several of the best practices, such as “source reliable data and track data provenance” and “verify and maintain data integrity during storage and transport,” align with the data supply chain risks discussed in the guidance. Organizations must establish comprehensive systems to track how data was obtained, processed, and modified throughout its lifecycle, requiring cryptographically signed records of data transformations.
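As a minimal sketch of such a record, assuming an Ed25519 signing key managed by the pipeline (the field names are illustrative, not mandated by the guidance), each transformation step might emit a signed entry like this:

```python
import hashlib
import json
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

def sha256_of(path: str) -> str:
    """Hash a dataset file in chunks to avoid loading it into memory."""
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            digest.update(chunk)
    return digest.hexdigest()

def signed_provenance_record(path: str, step: str, source: str,
                             key: Ed25519PrivateKey) -> dict:
    """Emit a record of one transformation, signed so later consumers
    can detect tampering with either the data or its history."""
    record = {
        "artifact": path,
        "sha256": sha256_of(path),
        "step": step,       # e.g. "deduplication" or "pii-scrub"
        "source": source,   # upstream dataset the artifact was derived from
    }
    payload = json.dumps(record, sort_keys=True).encode()
    record["signature"] = key.sign(payload).hex()
    return record
```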
Maintaining data integrity during storage and transport requires robust cryptographic measures. Teams should implement cryptographic hashes and checksums to detect unauthorized alterations, and datasets should be digitally signed to prevent tampering. Adopting quantum-resistant cryptographic standards helps future-proof these protections against emerging threats.
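Continuing the sketch above, a downstream consumer can verify the record before trusting the data. Note that Ed25519 is chosen here for brevity; the quantum-resistant standards this paragraph mentions would call for a post-quantum signature scheme such as ML-DSA instead:

```python
import json
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PublicKey

def record_intact(record: dict, public_key: Ed25519PublicKey) -> bool:
    """Check that a provenance record was not altered after signing.
    A full check would also recompute the artifact's digest and
    compare it against record["sha256"]."""
    unsigned = {k: v for k, v in record.items() if k != "signature"}
    payload = json.dumps(unsigned, sort_keys=True).encode()
    try:
        public_key.verify(bytes.fromhex(record["signature"]), payload)
        return True
    except (InvalidSignature, KeyError, ValueError):
        return False  # bad signature, missing field, or malformed hex
```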
Model Governance and System Security
The integration of AI systems into organizational infrastructure demands continuous assessment and monitoring capabilities. AI promises to be an effective tool for supporting and improving cybersecurity work, including improving data analysis and detecting network anomalies. Data teams, working with security personnel, must establish mechanisms for detecting unexpected behaviors or performance drift.
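One lightweight way to operationalize this, sketched here with illustrative window and threshold values, is a rolling monitor that compares live prediction outcomes against the accuracy measured at deployment:

```python
from collections import deque

class PerformanceMonitor:
    """Rolling window over prediction outcomes; alert when accuracy
    falls meaningfully below the level measured at deployment."""

    def __init__(self, window: int = 500,
                 baseline_accuracy: float = 0.95, tolerance: float = 0.05):
        self.outcomes = deque(maxlen=window)
        self.alert_threshold = baseline_accuracy - tolerance

    def record(self, prediction_correct: bool) -> None:
        self.outcomes.append(prediction_correct)

    def degraded(self) -> bool:
        if len(self.outcomes) < self.outcomes.maxlen:
            return False  # wait for a full window before judging
        return sum(self.outcomes) / len(self.outcomes) < self.alert_threshold
```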
AI-specific incident response procedures represent a critical gap in many organizations’ security postures. Traditional incident response playbooks must be updated to address unique AI threats such as model extraction or poisoning attacks. This aligns with the “respond” function of the NIST CSF Core, emphasizing specialized incident response planning tailored to AI system architectures.
Controlling privileged access to training data, enforcing least privilege for both human and nonhuman identities, and continuously monitoring for anomalous behavior are all practical, achievable steps organizations can take today. Secure infrastructure and access controls become paramount when protecting AI model repositories and APIs.
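The sketch below illustrates the default-deny, log-everything pattern; the roles and permission strings are hypothetical, and a real deployment would enforce this in the organization’s IAM system rather than in application code:

```python
import logging

logging.basicConfig(level=logging.INFO)
log = logging.getLogger("ai-asset-access")

# Hypothetical role-to-permission map covering human and nonhuman identities.
PERMISSIONS = {
    "data-engineer":     {"read:training-data", "write:training-data"},
    "ml-engineer":       {"read:training-data", "write:model-weights"},
    "inference-service": {"read:model-weights"},  # a nonhuman identity
}

def authorize(role: str, action: str) -> bool:
    """Default-deny check over explicitly granted permissions,
    with every decision logged for anomaly monitoring."""
    allowed = action in PERMISSIONS.get(role, set())
    log.info("role=%s action=%s allowed=%s", role, action, allowed)
    return allowed

authorize("inference-service", "read:training-data")  # denied and logged
```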
Cross-Functional Collaboration and Strategic Alignment
AI security has evolved from a theoretical concern to a frontline operational imperative requiring dedicated controls, testing protocols, and significant cross-functional oversight. This transformation necessitates unprecedented collaboration among data science, IT, and cybersecurity teams, breaking down traditional silos that can leave organizations vulnerable.
Data asset re-evaluation becomes essential as AI systems change the relative importance and risk profiles of organizational data. Cybersecurity practitioners and data teams must work together to update data asset inventories, accounting for new threats and risks introduced by AI capabilities. This effort aligns with the “identify” function of the NIST CSF Core.
Organizations must adapt their defensive strategies to address AI-enabled cyberattacks effectively. This includes updating security awareness training programs to address emerging threats such as AI-powered voice generators used for sophisticated phishing campaigns.
Navigating Challenges and Embracing Continuous Adaptation
The implementation of comprehensive AI security frameworks faces significant complexity challenges. Real-world Zero Trust implementations, for example, often involve multiple policy decision and enforcement points; overlooking any of them during planning can leave organizations vulnerable to sophisticated attacks.
Organizations can leverage the NIST Cybersecurity Framework Implementation Tiers to assess their current cybersecurity maturity and guide their journey toward enhanced AI security. These tiers provide a structured approach for organizations to evaluate their progress from limited risk awareness to formalized, integrated, and dynamic cybersecurity practices.
The rapidly evolving threat landscape for AI systems necessitates continuous risk assessments and adaptive security strategies. Cybersecurity in the age of AI requires ongoing research, development, and deployment of cutting-edge technologies to stay ahead of emerging threats.
Conclusion: Forging a Secure AI Future
NIST’s new Cybersecurity, Privacy, and AI guidance represents a critical milestone in integrating artificial intelligence into secure organizational practices. For data teams, this translates into an imperative to adopt robust data handling and provenance practices, implement comprehensive model governance, and engage in meaningful cross-functional collaboration.
Prioritizing these areas and embracing continuous adaptation will help organizations successfully navigate the complexities of AI-infused cyber and privacy risks, fostering a secure and trustworthy AI ecosystem that aligns with established cybersecurity standards while ensuring long-term organizational resilience.