AI is increasingly being plugged as the answer to all modern security challenges. Enterprises are increasingly using AI to fight fraud, identify anomalies, protect APIs, filter out phishing attempts, and even react to potential threats. In many corporate suites, these systems are plugged as the intelligent shield that protects enterprise infrastructures.
There’s a dangerous irony infusing in all this.
As enterprises increasingly embed AI into their infrastructures, AI itself becomes a valuable target for attackers.
The conversation is shifting from “How can AI protect us?” to a more urgent question: Who is protecting the AI?
From an enterprise architecture perspective, the biggest mistake organizations make is assuming AI behaves like traditional deterministic software.
Free AI Governance Webinars
Join us for a free upcoming webinar on AI governance challenges, trends, best practices, use cases, and more.
AI Is No Longer a Tool – It’s Infrastructure
AI is no longer a secondary technology used for testing and experimentation or as a tool for simple automation. Instead, AI is a fundamental component of the modern enterprise infrastructure and drives the business directly. Although the initial purpose of AI in the business environment was as experimental software used in analytics and development, today AI is used in critical business processes.
AI makes autonomous decisions in the business environment, including financial and operational decisions, and converses with the company’s systems using Application Programming Interfaces. Furthermore, AI agents in the business environment have access to contextual customer information.
This is due to the high level of integration, whereby today’s AI systems are capable of handling sensitive information such as customers’ data, financial information, and business knowledge. The information provided by the AI has a direct impact on the business, financial success, and customers. The AI systems are also integrated with distributed architecture and have exposed the inference API, thus allowing other applications to communicate with the AI programmatically. Therefore, the role of AI has shifted from a research-based technology to a business-critical solution.
The New Threat Landscape: AI-Specific Attack Vectors
Traditionally, security models were not built with probabilistic, learning-based systems in mind. In fact, AI postures never-before-seen types of risk.
1. Prompt Injection Attacks
Large language models (LLMs) are also very susceptible to prompt injection attacks, where the attacker seeks to subvert the security measures embedded in the model by manipulating the input prompts to the model. Instead of relying on the conventional software vulnerabilities, the attacker seeks to manipulate the model’s interpretation of the prompts. For instance, the attacker could include hidden prompts in the uploaded documents or user input, which the model would unknowingly execute. In some cases, the attacker could even manipulate the model to disclose certain prompts, such as system prompts, policies, and other restricted information. Additionally, the attacker could seek to subvert the safety measures to ensure the model accesses certain restricted information.
Unlike traditional injection attacks, such as SQL injection, this particular threat does not involve an exploit of code execution. Rather, it involves an exploit of the model’s reasoning. As was stated, these models are not broken; they are simply following their programming and their training. However, as these are intended to be beneficial and responsive to instructions, they can also be tricked by carefully constructed prompts. Thus, the model does behave as intended, but with potentially detrimental consequences.
2. Data Poisoning
This means that the integrity of machine learning models is directly dependent on the data that is used to improve the models. Therefore, if attackers are able to gain access to the critical parts of the data flow, such as the training pipelines, feedback loops, user-generated data, and reinforcement learning, they are able to feed the machine learning models with data that is meant to mislead the models. This is called data poisoning, and attackers are able to influence the machine learning models in a subtle way.
When compromised, the effect can be substantial and hard to identify. For instance, a fraud detection model can start to slowly cease identifying specific kinds of fraud, thus allowing fraudsters to evade detection. A security detection model can also be compromised so that it does not identify particular kinds of attacks. A recommendation system can also be compromised so that it slightly changes its results with the intention of manipulating user behavior. Unlike traditional attacks that are easily identifiable as they make systems malfunction or even crash, data poisoning attacks are different in that they slowly change the model’s behavior, with the system running as expected. Because of this, these attacks can go unnoticed for a long time, making them dangerous and hard to identify.
3. Model Inversion and Data Extraction
The AI models have the ability to learn and memorize the data they are trained with. Advanced hackers have the ability to use this to their advantage by repeatedly asking the AI model the same question and, based on the pattern of the answer, slowly start to reverse-engineer the characteristics of the data that the model has been trained with. It is possible for the hackers to retrieve the data that has been embedded with the AI model’s learned patterns. In such cases, the AI model acts as a data leakage vector.
4. Adversarial Examples
Adversarial attacks, as their name suggests, are based on the vulnerability of AI models. These attacks involve making minor, often imperceptible, changes to the data that can affect the output of the AI model. These changes can affect image classification systems, allowing malware to evade AI system detection, and even allowing malicious activities to evade anomaly detection. Although the model may seem to be functioning perfectly, it can still be quite vulnerable.
Adversarial attacks are often based on imperceptible changes that can affect AI System models. These changes can affect various AI models, including malware detection, image classification, and more. Although AI models may seem to be functioning perfectly, they can still be quite vulnerable.
5. AI API Exploitation
Typically, modern AI systems are accessible via APIs, which, as discussed, also creates security issues. There are various attacks that can be launched by an attacker, including abuse of inference, model enumeration, denial of service, and analysis of response patterns to obtain characteristics of system behavior. As discussed, due to these security issues, AI systems must be treated like any other public application. Therefore, appropriate security measures must be put in place, including API gateway security, to ensure that security and reliability are maintained.
Learn, Improve, Succeed
Get access to dozens of courses and conference sessions with our Essential Subscription.
Use code ESSENTIAL50 for 50% off through March 31
The Enterprise Blind Spot
For most enterprises, security has traditionally focused on protecting the core assets such as databases, network perimeters, application layers, and identity and access management. However, most enterprises do not realize the importance of protecting other critical assets powering the AI systems. Assets such as weights, templates, feature engineering, and datasets are not typically considered security assets, although they may carry sensitive and crucial knowledge and logic for the enterprise.
For most enterprises, the AI system often finds itself positioned between the application logic layer and the data layer. By positioning itself between the above-mentioned layers, the AI system has the ability to see both the business logic that drives the applications and the data that drives the insights and decisions for the enterprise. If an attacker gains access to the AI system, it is likely that he/she may gain access indirectly to both the above-mentioned layers, making it a powerful and underestimated attack surface for most enterprises.
The Governance Gap
There is also a big governance gap that exists with organizations that are using AI systems. Most traditional software systems are deterministic, meaning that their logic can be followed by software engineers. However, with AI, their logic cannot be followed, as they are probabilistic by nature. These are just a few of the governance issues that arise with deterministic and probabilistic logic. There are also issues of explaining their decisions, complexity in performing audits, ambiguity with regulations, and complexity in constantly monitoring their behavior.
When an AI system does something undesirable, we cannot pinpoint a particular line of code or logic that may have caused the undesirable outcome. There are various reasons why an AI system may behave in a particular way. These reasons include biased data, changes that may occur over time, changes that may occur in the data, and also interactions between features that may exist in the model. These are just a few of the issues that may arise with AI, and they are complex. These complexities are what an attacker can also use, as a system that cannot be easily monitored cannot also be easily defended.
Production AI: The Risk Multiplier
While organizations are using AI systems to rapidly deploy the benefits of AI to gain a competitive advantage, the reality is that when the AI system enters the production environment, the risk of the AI system increases manifold. The production environment of the AI system is a risk multiplier. This is because the AI system works on a large scale, works independently to a large degree, processes sensitive or confidential information, and often works across a distributed cloud environment. So, the moment there is a problem, the problem will amplify itself across the AI system, affecting a large number of users simultaneously.
Let’s consider a few possible scenarios. The AI system for detecting fraudulent transactions might be compromised or not trained properly. It might end up allowing fraudulent transactions to go through. The AI system for the DevSecOps process might not be able to detect critical vulnerabilities in the code. It might allow the deployment of software into the production environment. The AI system for the chatbot might end up revealing the internal architecture of the system to the user. The AI system for pricing might be used to reveal the pricing logic of the organization to the competitor.
Securing AI by Design
If AI is to become an integral part of the infrastructure, it too needs to be secured with the same level of concern as traditional infrastructure systems. To do this, the security of the AI system needs to be embedded throughout the entire life cycle of the AI, from the ingestion of the data to the training of the model, and through to the deployment of the system, to ensure the system remains resilient to misuse, manipulation, and operational failures.
1. Secure Data Pipelines
Data pipelines are the foundation of the AI system, and therefore they need to be secured with the same level of concern. To do this, the organization needs to ensure the data being piped into the systems are coming from a trusted source, monitor the data distribution to ensure anomalies are not present, implement strict access controls to prevent the data from being manipulated, and ensure the data’s integrity is such that any changes to the data are easily flagged.
2. Protect Model Artifacts
The model artifacts, which include the trained model’s weights and configuration, are important intellectual properties that need to be secured. The model artifacts need to be encrypted to prevent unauthorized access, and the repository holding the model has to ensure robust authentication and access control. Additionally, the system has to monitor for any suspicious activity, which could imply the model has been tampered with.
3. Harden Inference APIs
The AI models are normally accessed through the inference APIs, and this makes them a potential entry point for the attacker. To ensure the APIs are secure, the organization has to implement robust authentication, monitor for unusual access patterns that could imply the model is being abused, and implement logging to ensure that the suspicious activity can be traced.
4. Prompt Security Controls
When implementing prompt security controls in a system that relies on a large language model, the following considerations are relevant: the input prompt must be sanitized to remove any malicious or manipulative instructions, the context data must be isolated to avoid any information leakage, and the output data must be filtered to ensure compliance with safety and policy requirements.
5. Model Monitoring
After the AI models are integrated into the system, the models must be monitored for any changes in their behavior to ensure that the models are functioning as expected. The monitoring of the models should include the following considerations: the models should be monitored for any changes in the input data distribution, the performance of the models should be monitored over time, the models should be monitored for any bias that could emerge in the predictions, and the models should be integrated into the threat modeling of the organization as a whole, as the models are a crucial part of the enterprise system.
The Cultural Shift: AI Is Not Magic
One of the biggest dangers of AI is a psychological threat rather than a technical threat. Real-world examples of such issues, such as prompt injection attacks on LLM copilots and model extraction attacks on public APIs, show that the trust in AI systems can lead to new security issues. There are a number of security issues surrounding AI, and the problem is not necessarily due to the AI, but due to the fact that people trust these systems too much. To resolve this problem, there is a need to add a human-in-the-loop component as well as an explainability component.
AI as Both Defender and Target
Modern enterprise architecture has created a paradox when it comes to AI system, where it is used for security purposes such as fraud detection, API monitoring, security for the development pipeline, and automation for threat response. At the same time, the same AI system is often made available through APIs, uses external data sources for training, and is frequently updated and integrated throughout the enterprise architecture. This has created a paradox where AI system has become both a security shield and an entry point for hackers within modern enterprise architecture.
In telecom environments, fraud detection models process thousands of events per second through API gateways, making them high-value targets.
The Strategic Imperative
As the adoption of AI is speeding up in various sectors such as telecommunication, finance, healthcare, and retail, the successful organizations are those that are recognizing AI as a security-critical system rather than recognizing it as a mere feature. This means that the question of whether or not to adopt AI is no longer in question. Rather, the question is how the AI is being secured. As soon as the AI is being embedded into the infrastructure of the decision-making process of the enterprise, it inherits the risk profile of every system it is interacting with.
Final Thoughts
The real risk is not that AI replaces humans. The real risk is that organizations deploy AI into production faster than they can secure it. Once AI is part of the enterprise’s decision-making infrastructure, it inherits the risk profile of every system it touches. That makes AI not only a tool, but also a new attack surface.
Live Online Course: Data Architecture Intensive
Learn how to design, assess, and evolve your architecture to meet current and future demands.


