Trust infrastructure for the AI economy.

Groundr is the verification layer that sits between AI models and your users. Every output is arbitrated against multiple frontier models and grounded in real-time evidence — so autonomous AI systems can act with confidence.

Verifications
Disagreements Caught
Models Tracked
AI Providers

How Groundr works

Step 01

Submit

AI agents submit claims with confidence scores.

Step 02

Ground

Evidence gathered from live web sources.

Step 03

Detect

Semantic analysis finds conflicts between models.

Step 04

Arbitrate

The 40/60 Rule weights evidence above model confidence.

Step 05

Resolve

Winner enters Shared Reality. Reputations update.
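
The five steps map onto a simple loop. Below is a minimal sketch in Python; the evidence lookup is stubbed and all names are illustrative placeholders, not Groundr's internal implementation.

from dataclasses import dataclass

@dataclass
class Claim:
    model: str
    text: str
    confidence: float  # self-reported by the model, 0.0-1.0

def gather_evidence(query: str, claim_text: str) -> float:
    """Step 02 - Ground: score of supporting live-web evidence (stubbed here)."""
    return 0.0

def effective_score(confidence: float, evidence: float) -> float:
    """Step 04 - Arbitrate: the 40/60 Rule weights evidence above confidence."""
    return 0.4 * confidence + 0.6 * evidence

def arbitrate(query: str, claims: list[Claim]) -> Claim:
    # Step 01 - Submit: claims arrive with confidence scores.
    evidence = {c.model: gather_evidence(query, c.text) for c in claims}
    # Step 03 - Detect: semantic conflict analysis runs here (omitted in this sketch).
    # Step 05 - Resolve: the highest-scoring claim wins and reputations update.
    return max(claims, key=lambda c: effective_score(c.confidence, evidence[c.model]))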

Why Groundr?

Multi-Model Arbitration

Groundr queries multiple frontier AI models in parallel and applies proprietary semantic analysis to surface conflicting claims.
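
As an illustration only, disagreement detection could be approximated with parallel provider calls and pairwise embedding similarity. ask_model and embed below are hypothetical stand-ins, and the 0.8 threshold is an assumption.

import concurrent.futures
import itertools

def ask_model(model: str, query: str) -> str:
    return f"answer from {model}"           # stand-in for a real provider call

def embed(text: str) -> list[float]:
    return [float(ord(ch)) for ch in text]  # stand-in for a real embedding model

def cosine(a: list[float], b: list[float]) -> float:
    dot = sum(x * y for x, y in zip(a, b))
    norm = (sum(x * x for x in a) ** 0.5) * (sum(y * y for y in b) ** 0.5)
    return dot / norm if norm else 0.0

def find_disagreements(query: str, models: list[str], threshold: float = 0.8):
    with concurrent.futures.ThreadPoolExecutor() as pool:
        answers = dict(zip(models, pool.map(lambda m: ask_model(m, query), models)))
    # Pairs whose answers are semantically far apart are flagged as conflicts.
    return [(a, b) for a, b in itertools.combinations(models, 2)
            if cosine(embed(answers[a]), embed(answers[b])) < threshold]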

Evidence-First Scoring

Model confidence is deliberately weighted below verifiable external evidence — a confident liar always loses to a sourced truth.

Hallucination Detection

If every model agrees but none can provide evidence, Groundr flags a "Consensus Hallucination", preventing a shared error from snowballing into accepted fact.
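
A minimal sketch of that check, with an assumed minimum-evidence threshold:

def consensus_hallucination(claims_agree: bool, evidence_scores: list[float],
                            min_evidence: float = 0.2) -> bool:
    # All models say the same thing, yet none can point to supporting evidence.
    return claims_agree and all(score < min_evidence for score in evidence_scores)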

Truth Anchoring

High-authority sources receive a significant reliability bonus. A single truth anchor can outweigh a majority of unsupported claims.

Disagreement Map

Every conflict is logged, building a valuable dataset that reveals exactly where and why AI models fail: a proprietary data moat.
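
A hypothetical record shape, for illustration only (field names are not the stored schema):

conflict_record = {
    "query": "Does the EU Pro plan include feature X?",
    "claims": {"gpt-4o": "Yes...", "claude-3-5-sonnet": "No..."},
    "conflict_type": "factual_contradiction",
    "resolution": "evidence_backed_claim_won",
    "logged_at": "2025-01-01T00:00:00Z",
}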

Agent Reputation

Each AI model earns trust points for correct arbitrations and loses them for failures. Track reliability over time with temporal decay.
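
One way to model this, assuming an exponential half-life for the temporal decay; the constants are illustrative, not Groundr's actual values.

import math

def decayed_trust(points: float, days_since_update: float,
                  half_life_days: float = 30.0) -> float:
    # Old wins and losses count for less as time passes.
    return points * math.exp(-math.log(2) * days_since_update / half_life_days)

def update_trust(points: float, won_arbitration: bool,
                 reward: float = 1.0, penalty: float = 2.0) -> float:
    return points + reward if won_arbitration else points - penalty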

Connect in minutes

Option 1: Truth-Augmented Generation (New)

Generate Content via /generate

Send a prompt and receive polished, beautifully formatted content that has been invisibly arbitrated and fact-checked against live web sources. AI hallucinations are automatically corrected.

POST /generate

{
  "prompt": "Write a short essay on Bitcoin",
  "format": "essay",
  "tone": "professional",
  "max_words": 400
}

Response

Groundr returns the polished content, confidence scores, the number of models/sources used, and explicit flags if any hallucinations were intercepted and corrected.

{
  "content": "Bitcoin, a digital cryptocurrency...",
  "format": "essay",
  "grounding_status": "corrected",
  "corrections_made": [
    {
      "wrong_claim": "Bitcoin is backed by gold",
      "correct_fact": "Bitcoin is not backed by physical assets",
      "source": "https://investopedia.com/..."
    }
  ],
  "confidence_score": 0.95,
  "models_consulted": 3,
  "sources_used": 4
}
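
A minimal client call, assuming a hosted base URL of https://api.groundr.example and bearer-token auth (substitute your actual endpoint and key):

import requests

resp = requests.post(
    "https://api.groundr.example/generate",
    headers={"Authorization": "Bearer YOUR_API_KEY"},
    json={
        "prompt": "Write a short essay on Bitcoin",
        "format": "essay",
        "tone": "professional",
        "max_words": 400,
    },
    timeout=60,
)
result = resp.json()
if result["grounding_status"] == "corrected":
    for fix in result["corrections_made"]:
        print(f'Corrected: {fix["wrong_claim"]!r} -> {fix["correct_fact"]!r}')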

Option 2: Raw Arbitration Engine (Verify v1)

Verify with Groundr v1

Send raw model outputs and receive an action-oriented trust verdict your application can enforce. Ideal for custom UI flows.

POST /v1/verify

{
  "query": "Does the EU Pro plan include feature X?",
  "outputs": [
    {"model": "gpt-4o", "claim": "Yes...", "confidence": 0.90},
    {"model": "claude-3-5-sonnet", "claim": "No...", "confidence": 0.86}
  ]
}

Response

Groundr returns policy-traceable outputs: action, risk, uncertainty, and atom-level evidence. The full field list is available in the Documentation or the OpenAPI spec.

{
  "schema_version": "1.0.0",
  "safe_to_display": false,
  "risk_level": "HIGH",
  "recommended_action": "HUMAN_REVIEW",
  "confidence_score": 0.42,
  "warnings": ["Multiple models disagree on material facts."],
  ...
}
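
Enforcing the verdict in application code might look like the sketch below, using the fields shown above; the base URL and auth header are assumptions.

import requests

verdict = requests.post(
    "https://api.groundr.example/v1/verify",
    headers={"Authorization": "Bearer YOUR_API_KEY"},
    json={
        "query": "Does the EU Pro plan include feature X?",
        "outputs": [
            {"model": "gpt-4o", "claim": "Yes...", "confidence": 0.90},
            {"model": "claude-3-5-sonnet", "claim": "No...", "confidence": 0.86},
        ],
    },
    timeout=30,
).json()

if not verdict["safe_to_display"] or verdict["recommended_action"] == "HUMAN_REVIEW":
    print("Escalating to human review:", verdict["warnings"])
else:
    print("Displaying answer, confidence:", verdict["confidence_score"])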

Legacy endpoints still exist for compatibility. See Legacy API.

Core Algorithms

Effective Score Calculation

Score = 0.4 × Confidence + 0.6 × Evidence

A proprietary formula ensures evidence always outweighs raw model confidence. Unsupported claims are systematically down-ranked.
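
A worked example with illustrative numbers shows why a confident but unsupported claim loses to a less confident, well-evidenced one:

def effective_score(confidence: float, evidence: float) -> float:
    return 0.4 * confidence + 0.6 * evidence

confident_liar = effective_score(confidence=0.95, evidence=0.10)  # 0.44
sourced_truth = effective_score(confidence=0.60, evidence=0.90)   # 0.78
assert sourced_truth > confident_liar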

Truth Anchor Multiplier

Reliability = f(Base, Authority Bonus)

High-authority sources receive a significant bonus, creating a "gravity well" for verified truth.
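
As an illustration of the idea only (not Groundr's actual weights or source list), an additive bonus capped at 1.0 could look like:

TRUTH_ANCHORS = {"who.int", "sec.gov", "nature.com"}  # illustrative domains

def reliability(base: float, source_domain: str, anchor_bonus: float = 0.5) -> float:
    # High-authority sources pull the reliability score upward.
    bonus = anchor_bonus if source_domain in TRUTH_ANCHORS else 0.0
    return min(1.0, base + bonus)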