Groundr is the verification layer that sits between AI models and your users. Every output is arbitrated against multiple frontier models and grounded in real-time evidence — so autonomous AI systems can act with confidence.
AI agents submit claims with confidence scores.
Evidence is gathered from live web sources.
Semantic analysis finds conflicts between models.
40/60 Rule scores evidence over confidence.
Winner enters Shared Reality. Reputations update.
Groundr queries multiple frontier AI models in parallel and uses proprietary semantic analysis to detect disagreements and surface conflicting claims.
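The semantic analysis itself is proprietary, but the shape of the step can be sketched: fan the prompt out to several models concurrently, then run a pairwise conflict check. In this minimal Python sketch, ask_model and claims_conflict are hypothetical stand-ins, not Groundr internals.

import asyncio
import itertools

async def ask_model(model: str, prompt: str) -> dict:
    # Placeholder: replace with a real call to the model provider's API.
    return {"model": model, "claim": f"answer from {model}"}

def claims_conflict(a: str, b: str) -> bool:
    # Stand-in test; the real system compares claims semantically, not literally.
    return a.strip().lower() != b.strip().lower()

async def detect_disagreements(prompt: str, models: list[str]) -> list[tuple[str, str]]:
    # Query every model in parallel, then check each pair of answers for conflict.
    answers = await asyncio.gather(*(ask_model(m, prompt) for m in models))
    return [
        (x["model"], y["model"])
        for x, y in itertools.combinations(answers, 2)
        if claims_conflict(x["claim"], y["claim"])
    ]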
Model confidence is deliberately weighted below verifiable external evidence — a confident liar always loses to a sourced truth.
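As a rough illustration of the 40/60 Rule (the exact formula is proprietary, so the weights below are assumptions), evidence carries the 60% share and raw confidence the 40%:

def grounded_score(model_confidence: float, evidence_score: float) -> float:
    # Illustrative 40/60 split: evidence (0-1) carries more weight than confidence (0-1).
    return 0.4 * model_confidence + 0.6 * evidence_score

grounded_score(0.95, 0.0)   # 0.38, confident but unsourced
grounded_score(0.60, 0.90)  # 0.78, modest confidence, strong evidence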
If all models agree but none can provide evidence, Groundr flags a "Consensus Hallucination," preventing unanimous but unsourced claims from snowballing into accepted fact.
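A minimal sketch of that check, assuming each claim record carries an optional sources list:

def consensus_hallucination(claims: list[dict]) -> bool:
    # Flag when every model makes the same claim but none cites any evidence.
    all_agree = len({c["claim"] for c in claims}) == 1
    none_sourced = all(not c.get("sources") for c in claims)
    return all_agree and none_sourced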
High-authority sources receive a significant reliability bonus. A single truth anchor can outweigh a majority of unsupported claims.
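For illustration only, with made-up domains and bonus values, the effect looks like this: one anchored claim outweighs any number of claims that cite nothing.

# Domains and bonus values here are assumptions, not Groundr's real weights.
AUTHORITY_BONUS = {"sec.gov": 2.5, "personal-blog.example": 1.0}

def evidence_weight(sources: list[str]) -> float:
    return sum(AUTHORITY_BONUS.get(s, 1.0) for s in sources)

evidence_weight(["sec.gov"])  # 2.5, a single truth anchor
evidence_weight([])           # 0.0, an unsupported claim contributes nothing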
Every conflict is logged, creating a valuable dataset that reveals exactly where and why AI models fail — your proprietary data moat.
Each AI model earns trust points for correct arbitrations and loses them for failures. Track reliability over time with temporal decay.
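A minimal sketch of such a reputation score, assuming an exponential half-life decay (the actual decay curve and point values are Groundr internals):

HALF_LIFE_DAYS = 30.0  # assumed decay rate, for illustration only

def decayed(points: float, age_days: float) -> float:
    # Older arbitration outcomes count exponentially less.
    return points * 0.5 ** (age_days / HALF_LIFE_DAYS)

def reputation(history: list[tuple[float, bool]], now_days: float) -> float:
    # history holds (timestamp_in_days, was_correct) for each arbitration.
    return sum(decayed(1.0 if correct else -1.0, now_days - t) for t, correct in history)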
Send a prompt and receive polished, beautifully formatted content that has been invisibly arbitrated and fact-checked against live web sources. AI hallucinations are automatically corrected.
POST /generate
{
"prompt": "Write a short essay on Bitcoin",
"format": "essay",
"tone": "professional",
"max_words": 400
}
Groundr returns the polished content, confidence scores, the number of models/sources used, and explicit flags if any hallucinations were intercepted and corrected.
{
"content": "Bitcoin, a digital cryptocurrency...",
"format": "essay",
"grounding_status": "corrected",
"corrections_made": [
{
"wrong_claim": "Bitcoin is backed by gold",
"correct_fact": "Bitcoin is not backed by physical assets",
"source": "https://investopedia.com/..."
}
],
"confidence_score": 0.95,
"models_consulted": 3,
"sources_used": 4
}
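A sketch of the round trip in Python using the requests library; the base URL and auth header are placeholders, while the request and response fields match the examples above:

import requests

resp = requests.post(
    "https://api.groundr.example/generate",            # placeholder base URL
    headers={"Authorization": "Bearer YOUR_API_KEY"},   # assumed auth scheme
    json={
        "prompt": "Write a short essay on Bitcoin",
        "format": "essay",
        "tone": "professional",
        "max_words": 400,
    },
    timeout=60,
)
data = resp.json()
if data["grounding_status"] == "corrected":
    # Surface any hallucinations that were intercepted and fixed.
    for fix in data["corrections_made"]:
        print(f'Fixed: {fix["wrong_claim"]!r} -> {fix["correct_fact"]!r}')
print(data["content"])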
Send raw model outputs and receive an action-oriented trust verdict your UI can enforce. Ideal for custom UI flows.
POST /v1/verify
{
"query": "Does the EU Pro plan include feature X?",
"outputs": [
{"model": "gpt-4o", "claim": "Yes...", "confidence": 0.90},
{"model": "claude-3-5-sonnet", "claim": "No...", "confidence": 0.86}
]
}
Groundr returns policy-traceable outputs: action, risk, uncertainty, and atom-level evidence. The full field list is in the Documentation or the OpenAPI spec.
{
"schema_version": "1.0.0",
"safe_to_display": false,
"risk_level": "HIGH",
"recommended_action": "HUMAN_REVIEW",
"confidence_score": 0.42,
"warnings": ["Multiple models disagree on material facts."],
...
}
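A sketch of how a client might enforce that verdict; again, the base URL and auth header are placeholders, and the field names follow the example above:

import requests

verdict = requests.post(
    "https://api.groundr.example/v1/verify",            # placeholder base URL
    headers={"Authorization": "Bearer YOUR_API_KEY"},     # assumed auth scheme
    json={
        "query": "Does the EU Pro plan include feature X?",
        "outputs": [
            {"model": "gpt-4o", "claim": "Yes...", "confidence": 0.90},
            {"model": "claude-3-5-sonnet", "claim": "No...", "confidence": 0.86},
        ],
    },
    timeout=30,
).json()

# Enforce the verdict in the UI layer.
if not verdict["safe_to_display"] or verdict["recommended_action"] == "HUMAN_REVIEW":
    print("Escalate:", verdict["risk_level"], verdict["warnings"])
else:
    print("Render answer, confidence:", verdict["confidence_score"])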
Legacy endpoints still exist for compatibility. See Legacy API.
A proprietary formula ensures evidence always outweighs raw model confidence. Unsupported claims are systematically down-ranked.
High-authority sources receive a significant bonus, creating a "gravity well" for verified truth.