Dash starts useful on the synthetic SaaS data and gets sharper as you give it your own. Most of that work is curating knowledge and scheduling proactive runs.
Point Dash at your own data
The synthetic SaaS dataset is a starting point so you can play with Dash before swapping in real data. To use your own:
- Replace the data loader. Either rewrite `scripts/generate_data.py` or `pg_restore` directly into the public schema (a minimal loader sketch follows this list).
- Rewrite knowledge. Update `knowledge/tables/` for your schemas, `knowledge/queries/` for proven SQL, and `knowledge/business/` for your definitions and gotchas.
- Reload: `docker exec -it dash-api python scripts/load_knowledge.py --recreate`
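If your data lives in CSV exports rather than a Postgres dump, the loader can be a few lines of pandas. The sketch below is a hedged illustration, not the shipped script: the file paths, table names, and connection URL are assumptions, so adapt them to whatever `scripts/generate_data.py` does in your checkout.

```python
# Hypothetical replacement for scripts/generate_data.py: load your own CSV
# exports into the public schema instead of generating synthetic SaaS data.
# File paths, table names, and the DATABASE_URL default are assumptions.
import os

import pandas as pd
from sqlalchemy import create_engine

DB_URL = os.environ.get("DATABASE_URL", "postgresql+psycopg2://dash:dash@localhost:5432/dash")

TABLES = {
    "customers": "data/customers.csv",
    "subscriptions": "data/subscriptions.csv",
}

def main() -> None:
    engine = create_engine(DB_URL)
    for table, csv_path in TABLES.items():
        df = pd.read_csv(csv_path)
        # if_exists="replace" keeps reruns idempotent while you iterate on the data.
        df.to_sql(table, engine, schema="public", if_exists="replace", index=False)
        print(f"loaded {len(df):,} rows into public.{table}")

if __name__ == "__main__":
    main()
```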
The Engineer will start building reusable views in the dash schema as it works (dash.monthly_mrr, dash.customer_health_score). The Analyst discovers and prefers those over re-querying raw tables.
Add knowledge layers
Three kinds of knowledge feed Dash. The Dash README walks through each with examples:
| Layer | What it is | Where in repo |
|---|---|---|
| Table metadata | Column meanings, value enums, gotchas | knowledge/tables/*.json |
| Query patterns | Tested SQL the Analyst can adapt | knowledge/queries/*.sql |
| Business rules | Metric definitions, common pitfalls | knowledge/business/*.json |
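The README documents the exact file formats. As a rough sketch, a table-metadata entry carries column meanings, value enums, and gotchas; the field names below are assumptions, shown as Python that writes the JSON so you can generate entries for a large schema instead of hand-writing them. Check the shipped files in `knowledge/tables/` for the shape Dash actually expects.

```python
# Hypothetical generator for one table-metadata entry. The keys
# ("description", "columns", "gotchas") are assumptions, not Dash's schema.
import json
from pathlib import Path

customers_metadata = {
    "table": "customers",
    "description": "One row per paying account; trial accounts live elsewhere.",
    "columns": {
        "plan": "Enum: free, starter, growth, enterprise.",
        "mrr_cents": "Monthly recurring revenue in cents, not dollars.",
        "churned_at": "NULL while the customer is active.",
    },
    "gotchas": [
        "mrr_cents is 0 for annually billed enterprise accounts; join invoices instead.",
    ],
}

Path("knowledge/tables").mkdir(parents=True, exist_ok=True)
Path("knowledge/tables/customers.json").write_text(json.dumps(customers_metadata, indent=2))
```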
After editing, reload:
```bash
docker exec -it dash-api python scripts/load_knowledge.py             # upsert
docker exec -it dash-api python scripts/load_knowledge.py --recreate  # fresh start
```
The fastest way to get Dash performing well is to feed it the queries your team already trusts. Each one reduces the surface area where the model has to invent SQL.
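For example, promoting a trusted MRR query into `knowledge/queries/` can be as simple as saving the SQL with a comment describing when to use it. The table and column names below are assumptions based on the synthetic dataset's shape, and the comment convention is a guess; mirror the shipped `knowledge/queries/*.sql` files rather than this sketch.

```python
# Hypothetical helper that drops a trusted query into knowledge/queries/.
# Table and column names are assumptions; adjust to your schema.
from pathlib import Path

MONTHLY_MRR = """\
-- Monthly recurring revenue by month, active subscriptions only.
SELECT date_trunc('month', s.started_at) AS month,
       SUM(s.mrr_cents) / 100.0          AS mrr_usd
FROM public.subscriptions s
WHERE s.churned_at IS NULL
GROUP BY 1
ORDER BY 1;
"""

Path("knowledge/queries").mkdir(parents=True, exist_ok=True)
Path("knowledge/queries/monthly_mrr.sql").write_text(MONTHLY_MRR)
```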
Schedule proactive runs
A useful data agent posts on its own. Morning MRR digest. Alerts when churn drifts. Weekly summary into Slack.
See Scheduling for the patterns. The Coda template is the working example to copy: it registers daily digest, issue triage, and repo sync schedules in app/main.py. The same pattern works for Dash.
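If you want the shape of a proactive run before reading the template, here is a minimal, framework-agnostic sketch using APScheduler. The job names, times, and bodies are placeholders, and this is not the template's registration API; the real pattern lives in app/main.py and the Scheduling docs.

```python
# Sketch of scheduled proactive runs with APScheduler. The job bodies are
# placeholders: in practice they would invoke the Dash team and post the
# result to Slack. The actual templates register schedules in app/main.py.
from apscheduler.schedulers.blocking import BlockingScheduler

scheduler = BlockingScheduler()

@scheduler.scheduled_job("cron", hour=8, minute=0)
def morning_mrr_digest() -> None:
    # Placeholder: ask Dash for yesterday's MRR movement and post it to Slack.
    print("running morning MRR digest")

@scheduler.scheduled_job("cron", day_of_week="mon", hour=9, minute=0)
def weekly_summary() -> None:
    # Placeholder: weekly health summary into the team channel.
    print("running weekly summary")

if __name__ == "__main__":
    scheduler.start()
```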
Run evals
Dash ships with five eval categories. Use them to track quality as you change knowledge or models:
```bash
python -m evals                       # all evals
python -m evals --category accuracy   # one category
python -m evals --verbose             # show full responses
```
| Category | Tests |
|---|---|
| accuracy | Correct data and meaningful insights |
| routing | Team routes to the right agent and tools |
| security | No credential or secret leaks |
| governance | Refuses destructive SQL operations |
| boundaries | Schema access boundaries respected |
Add your own cases as you discover them. Run evals before each deploy to catch regressions.
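The shipped evals define the real case format, so mirror those files when adding cases. As a sketch of the habit, a home-grown regression list can be as small as question/expectation pairs checked against the team's answer; everything below, including the `ask_dash` callable, is hypothetical.

```python
# Hypothetical shape for a home-grown accuracy case; the shipped evals define
# the real format. ask_dash() stands in for however you invoke the team.
CASES = [
    {
        "question": "What was MRR last month?",
        "must_mention": ["MRR"],                 # cheap keyword check, not a full grade
        "must_not_mention": ["DROP", "DELETE"],  # governance guardrail
    },
]

def run_cases(ask_dash) -> None:
    for case in CASES:
        answer = ask_dash(case["question"])
        assert all(term in answer for term in case["must_mention"]), case["question"]
        assert not any(term in answer for term in case["must_not_mention"]), case["question"]
```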
Going deeper
| To learn | See |
|---|---|
| The team architecture | dash/team.py and dash/agents/ |
| The inspiration | OpenAI’s in-house data agent |
| Knowledge in Agno generally | Knowledge |
| Comparable templates | Scout, Coda |
| Building a fully custom AgentOS app | Build a Product |