Groundedness scoring of model responses now Generally Available
Today, Contextual AI launched groundedness scoring for model responses.
Ensuring that agent responses are supported by retrieved knowledge is essential for RAG applications. While Contextual’s Grounded Language Models already produce highly grounded responses, groundedness scoring adds an extra layer of defense against hallucinations and factual errors.
When users query an agent with groundedness scores enabled, a specialized model automatically evaluates how well the claims made in the response are supported by the retrieved knowledge. Scores are reported for individual text spans, allowing for precise detection of unsupported claims. In the platform interface, the score for each text span is viewable on hover, and ungrounded claims are visually distinguished from grounded ones. Scores are also returned in the API, enabling developers to build functionality such as hiding ungrounded claims or adding caveats to specific sections of a response.
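As a rough illustration of what that post-processing can look like, here is a minimal sketch that flags low-scoring spans in a response. The field names (`start_idx`, `end_idx`, `score`) and the threshold are illustrative assumptions, not the exact API schema:

```python
# Sketch: annotate a response using per-span groundedness scores.
# NOTE: the span field names and the 0.5 threshold below are illustrative
# assumptions, not the exact fields returned by the API.

THRESHOLD = 0.5  # assumed cutoff for treating a span as ungrounded


def annotate_ungrounded(response_text: str, spans: list[dict]) -> str:
    """Append a caveat to low-scoring spans, working right to left so that
    earlier character offsets remain valid as text is inserted."""
    annotated = response_text
    for span in sorted(spans, key=lambda s: s["start_idx"], reverse=True):
        if span["score"] < THRESHOLD:
            start, end = span["start_idx"], span["end_idx"]
            claim = annotated[start:end]
            annotated = (
                annotated[:start]
                + f"{claim} [caveat: not supported by retrieved knowledge]"
                + annotated[end:]
            )
    return annotated
```

The same loop could just as easily drop ungrounded spans entirely instead of annotating them, depending on how strict the application needs to be.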
To get started, simply toggle “Enable Groundedness Scores” for an agent in the “Generation” section of the agent configuration page, or through the agent creation or edit API. Groundedness scores will automatically be generated and displayed in the UI, and returned as part of responses to /agent/{agent_id}/query requests.
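For the API route, the flow looks roughly like the sketch below: enable the setting on the agent, then read the scores from the query response. The base URL, the edit-endpoint path, the configuration field name, and the response shape shown here are assumptions for illustration; consult the API reference for the exact names.

```python
# Sketch: enable groundedness scores on an agent, then query it.
# Paths, payload fields, and response fields marked "assumed" are
# illustrative, not the authoritative API schema.
import os

import requests

BASE_URL = "https://api.contextual.ai/v1"  # assumed base URL
HEADERS = {"Authorization": f"Bearer {os.environ['CONTEXTUAL_API_KEY']}"}
agent_id = "my-agent-id"  # placeholder

# Turn the feature on via the agent edit API (path and field name assumed).
edit = requests.put(
    f"{BASE_URL}/agents/{agent_id}",
    headers=HEADERS,
    json={"enable_groundedness_scores": True},
)
edit.raise_for_status()

# Query the agent; groundedness scores come back alongside the response text.
resp = requests.post(
    f"{BASE_URL}/agent/{agent_id}/query",
    headers=HEADERS,
    json={"messages": [{"role": "user", "content": "What is our refund policy?"}]},
)
resp.raise_for_status()
data = resp.json()
for span in data.get("groundedness_scores", []):  # field name assumed
    print(span)
```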