AI Features FAQs


Which AI provider platforms does Traceable integrate with?

Traceable’s AI features leverage LLMs powered by Amazon Bedrock on AWS and Google Gemini on GCP. These enterprise-grade platforms provide robust security, industry-standard compliance, and high reliability, allowing Traceable to deliver AI-driven insights while keeping your data secure and private.

Where can I see AI-generated insights?

The AI-generated insights are currently visible on the Issues Detailed View page, under the label AI-Generated Insight. For more information, see Drilling Down into an Issue.

Does Traceable share my data with third-party AI providers?

No, Traceable does not share any sensitive data with external AI providers. All AI processing is done within secure environments using EULA-compliant models that adhere to strict security and compliance standards.

Does Traceable use my data to train AI models?

No, Traceable does not use your data to train any AI model. All AI features are powered by Amazon Bedrock and Google Gemini, which provide enterprise-grade security and comply with the following standards:

  • Amazon Bedrock — Compliant with ISO 27001, SOC 1, SOC 2, SOC 3, and GDPR. Your data is not shared with the model provider and is not used to improve foundation models. For more information, see Amazon Bedrock FAQs.

  • Google Gemini — Compliant with ISO/IEC 27001, 27017, 27018, 27701, SOC 1, SOC 2, SOC 3, and GDPR. Your data is not stored persistently and is never used to train models. For more information, see Certifications and security for Gemini for Google Cloud.

Using these platforms, Traceable ensures that your data remains private and secure at all times.

How does Traceable prevent the exposure of sensitive data?

Traceable incorporates multiple layers of protection to ensure that sensitive data is never exposed during AI processing. These safeguards include:

  • Data Anonymization — Before large language models (LLMs) process any data, Traceable uses LLM Guard to identify and obfuscate sensitive personal information. This ensures data privacy and industry-standard compliance.

  • Access Control Enforcement — All LLM APIs are protected using role-based access control that limits access to authorized users based on their permissions within the Traceable system.
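To illustrate the anonymization step conceptually, the sketch below masks common PII with typed placeholders before a prompt would be sent to an LLM. This is a minimal, regex-based illustration only; the patterns, labels, and `anonymize` function are assumptions for demonstration and do not represent Traceable's actual LLM Guard configuration.

```python
import re

# Simplified PII patterns -- illustrative only, not production-grade detection.
PII_PATTERNS = {
    "EMAIL": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "SSN": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "PHONE": re.compile(r"\b\d{3}[-.]\d{3}[-.]\d{4}\b"),
}

def anonymize(text: str) -> str:
    """Replace matched PII with typed placeholders before LLM processing."""
    for label, pattern in PII_PATTERNS.items():
        text = pattern.sub(f"[{label}]", text)
    return text

prompt = "Contact jane.doe@example.com or 555-867-5309 about this issue."
print(anonymize(prompt))
# Contact [EMAIL] or [PHONE] about this issue.
```

Real-world anonymizers typically go beyond regexes (e.g., named-entity recognition), but the principle is the same: sensitive values are replaced with neutral tokens so the LLM never sees the originals.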

Has Traceable set rate limits to prevent misuse of the AI features?

Yes, Traceable enforces strict rate limits on all AI-powered queries to prevent misuse, abuse, and unauthorized access. These limits help ensure consistent system performance, efficient resource use, and platform stability, while providing secure and fair usage of the AI features across users and teams.
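A common way such per-user limits are implemented is a token bucket, sketched below. The capacity and refill rate here are illustrative assumptions, not Traceable's actual limits.

```python
import time

class TokenBucket:
    """Minimal per-user token-bucket rate limiter (illustrative sketch)."""

    def __init__(self, capacity: int, refill_per_sec: float):
        self.capacity = capacity
        self.tokens = float(capacity)      # start with a full bucket
        self.refill_per_sec = refill_per_sec
        self.last = time.monotonic()

    def allow(self) -> bool:
        """Return True if a query may proceed, consuming one token."""
        now = time.monotonic()
        # Refill tokens proportionally to elapsed time, capped at capacity.
        self.tokens = min(self.capacity,
                          self.tokens + (now - self.last) * self.refill_per_sec)
        self.last = now
        if self.tokens >= 1:
            self.tokens -= 1
            return True
        return False

# Hypothetical limit: bursts of 5 queries, refilling at ~30 queries/minute.
bucket = TokenBucket(capacity=5, refill_per_sec=0.5)
results = [bucket.allow() for _ in range(6)]
print(results)  # the first 5 queries pass; the 6th is rejected
```

A token bucket permits short bursts up to its capacity while enforcing a steady long-term rate, which matches the stated goals of platform stability and fair usage across users.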

Can I disable the AI features?

Yes, if you are an Account Owner or have the corresponding permissions, you can disable AI features at any time. Navigate to Settings > Configuration > AI Features and turn off the toggle for each feature you wish to disable.