🔍 Explainability
This highly requested feature is a game-changer for understanding and trusting AI-generated outputs: it shows users how the AI arrived at its results, so they can decide whether to accept them or make adjustments.
Assumptions
Every generated item now contains clearly laid out assumptions about the answer, insight, chart, or query. The system will tell you things like:
- "I am assuming by 'last month' you meant the 31 days of the calendar month February that just ended."
- "I am assuming 'top movies' refers to movies with highest revenue."
Assumptions are ranked by likelihood. When assumptions become too uncertain, the system will automatically ask for clarification rather than making potentially incorrect guesses.
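To make this concrete, here is a minimal sketch of how ranked assumptions with a clarification fallback could work. The field names, the `resolve` helper, and the threshold value are all illustrative assumptions, not the product's actual schema or documented behavior:

```python
from dataclasses import dataclass

# Hypothetical shape of the assumption metadata described above;
# field names are illustrative, not the real API.
@dataclass
class Assumption:
    text: str          # human-readable statement of the assumption
    likelihood: float  # estimated probability the assumption is correct

CLARIFICATION_THRESHOLD = 0.5  # illustrative cutoff, not a documented value

def resolve(assumptions: list[Assumption]) -> str:
    """Rank assumptions by likelihood; fall back to a clarifying
    question when even the best candidate is too uncertain."""
    ranked = sorted(assumptions, key=lambda a: a.likelihood, reverse=True)
    best = ranked[0]
    if best.likelihood < CLARIFICATION_THRESHOLD:
        return f"Could you clarify? I wasn't sure whether {best.text}."
    return f"Proceeding under the assumption that {best.text}."

print(resolve([
    Assumption("'last month' means the calendar month of February", 0.9),
    Assumption("'last month' means the trailing 30 days", 0.4),
]))
```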
Sources
All sources are now clearly noted and accessible, whether the information comes from:
- Table descriptions
- Metrics from the semantic layer
- Queries from the query history
- Previous chats
- Other contextual data
Responses carry all of these sources as metadata, giving you direct visibility into which information was used as context and which came from the model's training data.
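A sketch of what such source metadata might look like attached to a response; the keys, source types, and reference values below are assumptions for illustration only:

```python
# Illustrative response payload with source metadata; this is not
# the actual API shape, just one plausible structure.
response = {
    "answer": "Revenue grew 12% month over month.",
    "sources": [
        {"type": "table_description", "ref": "analytics.orders"},
        {"type": "semantic_layer_metric", "ref": "monthly_revenue"},
        {"type": "query_history", "ref": "query_1842"},
        {"type": "previous_chat", "ref": "chat_2024-02-12"},
    ],
}

# A consumer can then audit exactly which context informed the answer:
for source in response["sources"]:
    print(f"{source['type']}: {source['ref']}")
```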
Chain-of-Thought (CoT) and Reasoning Tokens
Any reasoning the model performed is returned as context for the answer. This is especially valuable for open-ended questions like "recommend relevant metrics for customer churn," since it shows why specific recommendations were made.
These data points also enable the model to interactively discuss decisions made about charts, reports, and other outputs, creating a more collaborative experience.
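As a rough sketch of that collaborative loop, a response could expose its reasoning steps so a follow-up turn can reference them directly. The field names and payloads here are hypothetical:

```python
# Hedged sketch: one way a response might expose its reasoning so a
# follow-up question can target a specific decision. Not the real API.
response = {
    "answer": "Suggested churn metrics: 90-day retention, NPS trend.",
    "reasoning": [
        "Churn is usually preceded by engagement decay, so a retention window applies.",
        "NPS trend was included because satisfaction drops often precede churn.",
    ],
}

# Feeding the reasoning back into the next turn lets the user
# interrogate one decision rather than the whole answer.
follow_up = {
    "question": "Why NPS trend instead of support-ticket volume?",
    "context": response["reasoning"],
}
print(follow_up)
```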
With these explainability features, users can now have greater confidence in their generated insights and make more informed decisions about when to accept, modify, or refine the AI's outputs.