Vulnerability Disclosure Policy (VDP)
Last updated: Jan 2026
1) Purpose
Contextual AI, Inc. is committed to protecting our customers and our platform, and we recognize the vital role that independent security researchers play in the ecosystem. We welcome reports from security researchers and the community to help identify and resolve security vulnerabilities responsibly. This policy aims to give security researchers clear guidelines for conducting vulnerability discovery activities and for submitting discovered vulnerabilities to us.
2) Scope
This policy covers security vulnerabilities in Contextual AI’s products and services only when the issue is new and original, has not been previously reported, and has not already been detected through our internal processes.
Domains in scope:
- contextual.ai
- app.contextual.ai
- api.app.contextual.ai
- eu.contextual.ai
- api.eu.contextual.ai
Researchers should not submit the following:
- High-volume / traffic flooding issues (availability attacks that overwhelm the service)
- TLS configuration observations (e.g., older protocol support or weak ciphers)
- Findings that cannot realistically be exploited
- “Best practice” gaps like missing security headers or email configuration issues (SPF/DMARC, etc.)
3) Rules of engagement
Researchers should:
- Avoid privacy violations; do not intentionally access, modify, delete, or exfiltrate data that is not yours.
- Use the minimum level of testing needed to confirm a vulnerability exists.
- Stop testing immediately if you encounter sensitive data (e.g., personal data, credentials, tokens, financial data) and report what you observed.
- Do not perform testing that degrades service availability or user experience.
4) Prohibited activities
The following activities are not authorized. Researchers shall not engage in:
- Denial of service (DoS/DDoS)
- Volumetric scanning
- Impacts to service availability
- Resource exhaustion
- Social engineering
- Phishing
- Vishing
- Threats
- Extortion
- Harassment
- Physical security pentesting
- Tailgating
- Facility access attempts
- Cloning or forging access cards
- Malware delivery, persistence, or pivoting to other systems
- Testing against out-of-scope targets
- Testing against Contextual AI customer deployments
- Testing against Contextual AI data sub-processors, cloud providers
- Unauthorized red teaming exercises
- Commercial solicitation for security products
- Reverse engineering of protected software or intellectual property
5) How to report a vulnerability
Please submit all reports through Bugcrowd using the submission form provided below. This ensures proper tracking, triage, and secure communication.
To help us triage quickly, include:
- A clear description of the issue and security impact
- Steps to reproduce (PoC, screenshots/videos where helpful)
- Affected endpoint(s), URLs, parameters, request/response samples
- Environment details (browser/app version), and any suggested remediation
- Whether the issue is reproducible and any limitations/conditions
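As an illustration, a report covering the fields above might be laid out as follows. All endpoint names, parameters, domains, and version numbers in this sketch are hypothetical and shown only to demonstrate the expected level of detail:

```
Title: Reflected XSS in search parameter

Description and security impact:
  The `q` parameter is reflected into the page without output encoding,
  allowing arbitrary JavaScript execution in a victim's session.

Steps to reproduce (PoC):
  1. Log in to the application.
  2. Visit https://example.invalid/search?q=<script>alert(1)</script>
  3. Observe the script executing (screenshot attached).

Affected endpoint(s): GET /search (parameter: q)
Request/response samples: attached (request.txt, response.txt)

Environment: Chrome 126 on macOS 14
Reproducible: Yes, consistently; requires an authenticated session.
Suggested remediation: Contextually encode user input before rendering.
```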
6) What you can expect from us
If you provide your contact information via Bugcrowd, we will:
- Acknowledge receipt of your report within the same business day
- Work to validate and remediate confirmed issues as quickly as practical
- Communicate with you through Bugcrowd regarding status and questions
7) Rewards and recognition
This is a Vulnerability Disclosure Program; unfortunately, we do not offer monetary rewards at this time.
8) Legalities and Safe Harbor
- This policy does not grant permission to violate any law or access data you do not own or have permission to access.
- Do not share vulnerability details, PoCs, screenshots, or videos publicly (e.g., public links) unless explicitly approved by Contextual AI and appropriately secured.
- If you make a good-faith effort to follow this policy, we will treat your research as authorized and will not initiate or recommend legal action against you for your security research activities. Such safe harbor does not apply to actions that exceed the scope of this policy, violate any laws, or are conducted with malicious intent.
- Contextual AI may update this policy at any time. Continued participation in vulnerability research activities after an update constitutes your acceptance of the updated policy.
9) Questions
For program questions, scope clarifications, or non-sensitive inquiries, use the Bugcrowd program messaging channel. You may also contact security@contextual.ai.