AI Security Testing (Beta)

Prev Next

AI Security Testing is an AST capability that enables you to automatically discover, test, and secure AI-powered APIs and large language model (LLM) endpoints. The built-in AI Scan Policy available under Testing → Settings → Policies identifies and validates AI endpoints against known risks.

Built on the OWASP Top 10 for AI framework, the policy uses Traceable’s test plugins to simulate real-world AI and LLM attack scenarios. These tests help you understand how your AI endpoints handle prompts, data, and responses, ensuring that your AI applications stay secure and compliant.

What will you learn in this topic

By the end of this topic, you will understand:

  • The concept of testing AI and LLM endpoints.

  • How Traceable detects and classifies AI issues using the test plugins.

  • The evidence and insights Traceable provides for each issue for faster remediation.


Understanding AI scans

Traceable provides the AI Scan policy that automatically detects issues in AI endpoints discovered under the API Catalog. This policy runs pre-defined test plugins categorized under AI that cover 16 attack types based on the OWASP Top 10 for AI, such as Prompt Injection, Sensitive Data Disclosure, and AI SQL Injection. You can use this policy while selecting or creating attacks during scan creation.

AI Scan Policy Selection

AI Scan Policy Selection

Each issue discovered is assigned a severity level to help you prioritize remediation. Traceable displays the detected issues under Testing → Issues → Filter → Category: AI, where you can review your issue findings and the corresponding evidence.


Actionable AI insights

Each AI issue includes an AI-generated insight summary that explains why the issue occurred, how you can fix it, and how to prevent it from occurring again. These summaries turn complex findings into clear, developer-friendly recommendations for faster and accurate remediation.

AI Issue Insights

AI Issue Insights


Issue findings and evidence

For each AI issue Traceable discovers, it displays evidence that includes real prompt-response transcripts between the test agent and the LLM, which shows exactly how the issue was triggered. The evidence is further mapped to OWASP LLM Top 10 categories and includes the trace visibility for better validation.

AI Issue Evidence

AI Issue Evidence

This information helps you and your security teams quickly understand what went wrong, replicate the condition, if needed, and work towards its remediation.