The Contextual AI Platform, powered by RAG 2.0
Easily build specialized RAG agents that your enterprise can trust
The end-to-end platform for building specialized RAG agents
- Deliver greater ROI than traditional AI copilots by addressing higher-value use cases
- Support subject matter experts in domain-specific knowledge work
- Retrieve and reason over massive volumes of unstructured and structured data
- Achieve higher accuracy than traditional RAG with jointly optimized RAG components
- Maintain retrieval performance at scale across complex, noisy enterprise data
- Specialize components, as part of a unified system, to precisely address your use case
Platform Capabilities
Achieve production-grade accuracy for any use case
Meet the stringent accuracy requirements needed to move your specialized RAG agents from demo to production
- Mixture-of-retrievers approach and SOTA reranker to retrieve and reason over text, images, charts, and other complex data sources
- Iterative retrieval and reasoning chains to sharpen accuracy for complex tasks
- Stable retrieval performance in real-world deployments with massive volumes of noisy enterprise data
- Language models grounded in retrieved data to improve accuracy and reduce hallucinations
- Tools to specialize RAG agents for the most complex and knowledge-intensive use cases
Reason over unstructured and structured data
Continuously ingest, extract, and retrieve your most important enterprise data—regardless of its scale, noisiness, or format
- Support for unstructured data sources like PDFs and HTML with rich media (e.g., images, charts, figures, tables, code)
- Support for structured data sources like data warehouses, databases, and spreadsheets
- Pre-built integrations with popular SaaS applications like Slack, Google Drive, GitHub, and more (see the ingestion sketch after this list)
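Documents can be uploaded to a datastore through the Python SDK. The snippet below is a minimal sketch: the ingestion method name (client.datastores.documents.ingest) and the datastore ID are assumptions for illustration, so confirm the exact method and parameters against the API reference.
# Ingest a document into a datastore (illustrative sketch)
import os
from contextual import ContextualAI
client = ContextualAI(
    api_key=os.environ.get("CONTEXTUAL_API_KEY"),
)
# "your_datastore_id" is a placeholder; the ingest method name is assumed from the API reference
with open("path/to/your/document.pdf", "rb") as file:
    ingest_response = client.datastores.documents.ingest(
        "your_datastore_id",
        file=file,
    )
print("Document ID:", ingest_response.id)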
Maximize end-user trust and confidence
Provide end-users with clear attributions to relevant, up-to-date data sources and protections against potential hallucinations
- Precise citations to retrieved documents, with bounding boxes that highlight the relevant data for the user (see the sketch after this list)
- Automated flagging of responses with low groundedness as potential hallucinations
- Automated and ongoing ingestion of new data to keep responses up to date
- Built-in evaluation tools to assess responses for equivalence and groundedness
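As a quick illustration, the sources behind an answer can be read from the query response returned by the SDK. This is a minimal sketch that assumes the response exposes a retrieval_contents list describing each retrieved chunk; field names may vary by SDK version, so confirm them against the API reference.
# Inspect the retrieved sources behind a query response (illustrative sketch)
import os
from contextual import ContextualAI
client = ContextualAI(
    api_key=os.environ.get("CONTEXTUAL_API_KEY"),
)
query_response = client.agents.query.create(
    agent_id="182bd5e5-6e1a-4fe4-a799-aa6d9a6ab26e",
    messages=[{
        "content": "content",
        "role": "user",
    }],
)
print(query_response.message)
# retrieval_contents is assumed to carry the chunks the answer was grounded in,
# with metadata (document name, page, etc.) that can back citations in a UI
for retrieval in query_response.retrieval_contents:
    print(retrieval)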
Meet robust enterprise security requirements
Deploy to production safely and confidently with a comprehensive suite of enterprise-grade security features
- SOC 2 certified to ensure enterprise data is properly secured and protected
- Role-based access controls to ensure responses are only grounded in data that is accessible to the user
- In-transit and at-rest encryption to protect sensitive data
- Protections to ensure output is safe, accurate, appropriate, and aligned with customer brand and content guidelines
Deploy in our cloud or yours
- Leverage a fully managed, highly secure SaaS offering on Contextual AI infrastructure
- Deploy within your virtual private cloud
- Deploy within your on-prem environment
Powerful APIs for the entire agent development lifecycle
# Create an agent
import os
from contextual import ContextualAI
client = ContextualAI(
    api_key=os.environ.get("CONTEXTUAL_API_KEY"),
)
create_agent_output = client.agents.create(
    name="agent_name",
)
print(create_agent_output.id)
# Create a datastore
import os
from contextual import ContextualAI
client = ContextualAI(
    api_key=os.environ.get("CONTEXTUAL_API_KEY"),
)
create_datastore_response = client.datastores.create(
    name="datastore_name",
)
print(create_datastore_response.id)
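An agent is typically grounded in one or more datastores. The sketch below assumes agents.create accepts a datastore_ids parameter, as described in the API reference, and uses a placeholder ID; pass the ID returned when you created your datastore.
# Create an agent grounded in an existing datastore (illustrative sketch)
import os
from contextual import ContextualAI
client = ContextualAI(
    api_key=os.environ.get("CONTEXTUAL_API_KEY"),
)
create_agent_output = client.agents.create(
    name="agent_name",
    # datastore_ids is assumed from the API reference; replace with your datastore's ID
    datastore_ids=["your_datastore_id"],
)
print(create_agent_output.id)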
# Query an agent
import os
from contextual import ContextualAI
client = ContextualAI(
    api_key=os.environ.get("CONTEXTUAL_API_KEY"),
)
query_response = client.agents.query.create(
    agent_id="182bd5e5-6e1a-4fe4-a799-aa6d9a6ab26e",
    messages=[{
        "content": "content",
        "role": "user",
    }],
)
print(query_response.message_id)
print(query_response.message)
# Create a tune job
import os
from contextual import ContextualAI
client = ContextualAI(
    api_key=os.environ.get("CONTEXTUAL_API_KEY"),
)
# Read the training file as bytes before uploading
with open("path/to/your/training_file.csv", "rb") as file:
    training_file_contents = file.read()
tune_response = client.agents.tune.create(
    agent_id="182bd5e5-6e1a-4fe4-a799-aa6d9a6ab26e",
    training_file=training_file_contents,
)
print("Tune job ID:", tune_response.id)
# Create an evaluation round
import os
from contextual import ContextualAI
client = ContextualAI(
    api_key=os.environ.get("CONTEXTUAL_API_KEY"),
)
launch_evaluation_response = client.agents.evaluate.create(
    agent_id="182bd5e5-6e1a-4fe4-a799-aa6d9a6ab26e",
    evalset_name="your_dataset_name",
    metrics=["equivalence"],
)
print(launch_evaluation_response.id)
# Create a dataset for tuning and eval
import os
from contextual import ContextualAI
client = ContextualAI(
    api_key=os.environ.get("CONTEXTUAL_API_KEY"),
)
# Read the dataset file as bytes before uploading
with open("path/to/your/dataset_file.csv", "rb") as file:
    dataset_file_contents = file.read()
create_dataset_response = client.agents.datasets.evaluate.create(
    agent_id="182bd5e5-6e1a-4fe4-a799-aa6d9a6ab26e",
    dataset_name="your_dataset_name",
    dataset_type="evaluation_set",
    file=dataset_file_contents,
)
print("Dataset name:", create_dataset_response.name)