March 1, 2026

TruthTeller AI

Multi-model deliberation platform that routes hard questions through an AI council, then exposes how the final answer was formed.

Multi-Model Systems
Rust
AI Platforms
TruthTeller AI is a deliberation platform built to reduce single-model bias, hallucination, and overconfidence. Instead of trusting one LLM response, it makes multiple models argue, rank, and synthesise before anything reaches the user.

The Problem

Most AI products still rely on a single black-box answer. That is fast, but it leaves users with no credible way to judge disagreement, uncertainty, or provenance. TruthTeller AI treats consensus as a product feature rather than an implementation detail.

The system runs a three-stage council: independent answer generation, anonymised peer review and ranking, then final synthesis by a designated chairman model. The output is not just a final answer, but an audit trail showing how that answer emerged.
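The three stages can be sketched as a small state container. This is an illustrative shape only, not the project's actual types: `Answer`, `Deliberation`, and the chairman closure are all invented names.

```rust
// Hypothetical sketch of the three-stage council flow; all names are illustrative.

#[derive(Debug, Clone)]
struct Answer {
    model: String,
    text: String,
}

#[derive(Debug)]
struct Deliberation {
    answers: Vec<Answer>,      // stage 1: independent answer generation
    rankings: Vec<Vec<usize>>, // stage 2: each reviewer's ordering of anonymised answers
    synthesis: Option<String>, // stage 3: chairman output
}

impl Deliberation {
    fn new(answers: Vec<Answer>) -> Self {
        Deliberation { answers, rankings: Vec::new(), synthesis: None }
    }

    // Stage 2: record one reviewer's ranking over the anonymised answers
    // (indices into `answers`, best first).
    fn add_ranking(&mut self, ranking: Vec<usize>) {
        assert_eq!(ranking.len(), self.answers.len());
        self.rankings.push(ranking);
    }

    // Stage 3: the chairman folds answers and rankings into a final response.
    // Keeping answers and rankings around is what makes the audit trail possible.
    fn synthesise(&mut self, chairman: impl Fn(&[Answer], &[Vec<usize>]) -> String) -> &str {
        let out = chairman(&self.answers, &self.rankings);
        self.synthesis = Some(out);
        self.synthesis.as_deref().unwrap()
    }
}
```

Because every stage's intermediate output is retained on the struct, the final answer ships with the evidence of how it was reached.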

The Architecture

TruthTeller AI was designed to run in two environments without forking the product. A shared Rust core carries the domain logic, while separate web and desktop runtimes wrap that core for different deployment models.

Unified Rust core

The t2ai-core crate powers both an Axum server for the web product and a Tauri backend for the native app, keeping deliberation logic, parsing, and orchestration in one place.
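A minimal sketch of that split, with invented names: the core exposes a plain function, and each runtime is a thin adapter over it. The Axum and Tauri wrappers are shown as comments because they depend on those frameworks' real signatures.

```rust
// Illustrative core API shape; `CouncilRequest`, `CouncilResponse`, and
// `deliberate` are assumptions, not the actual t2ai-core surface.

pub struct CouncilRequest {
    pub question: String,
}

pub struct CouncilResponse {
    pub answer: String,
}

// All deliberation logic lives behind one transport-agnostic entry point.
pub fn deliberate(req: CouncilRequest) -> CouncilResponse {
    // Real orchestration stubbed out for the sketch.
    CouncilResponse { answer: format!("deliberated: {}", req.question) }
}

// Web runtime: an Axum handler would be a thin wrapper, roughly:
//   async fn handle(Json(req): Json<CouncilRequest>) -> Json<CouncilResponse> {
//       Json(deliberate(req))
//   }
//
// Desktop runtime: a Tauri command would wrap the same function:
//   #[tauri::command]
//   fn handle(req: CouncilRequest) -> CouncilResponse { deliberate(req) }
```

The point of the design is that neither wrapper contains logic worth testing on its own; both products exercise the same core path.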

Dual transport layer

The Next.js frontend auto-detects whether it is running on the web or inside Tauri, then switches between HTTP plus SSE and native IPC plus event listeners without changing the user flow.

Local-first desktop mode

In the native runtime, conversations, API keys, and extracted file content stay on the local file system, which makes the product viable for users with stronger data-sovereignty requirements.
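The persistence side of local-first mode reduces to plain file-system writes. A minimal sketch, assuming a per-conversation JSON file layout (the actual directory structure and format are not shown here):

```rust
use std::fs;
use std::path::{Path, PathBuf};

// Hypothetical local-first storage helpers; the real app's paths,
// key handling, and serialisation will differ.

// Write one conversation to the local file system; nothing leaves the machine.
fn save_conversation(dir: &Path, id: &str, json: &str) -> std::io::Result<PathBuf> {
    fs::create_dir_all(dir)?;
    let path = dir.join(format!("{id}.json"));
    fs::write(&path, json)?;
    Ok(path)
}

// Read it back by id.
fn load_conversation(dir: &Path, id: &str) -> std::io::Result<String> {
    fs::read_to_string(dir.join(format!("{id}.json")))
}
```

Keeping the store this simple is what lets the same Rust core run unchanged whether the bytes land on disk (desktop) or behind a server-side store (web).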

Key Design Decisions

Blind peer evaluation

In the review stage, model identities are stripped and answers are relabelled as Response A, Response B, and so on. Rankings are parsed from text and aggregated using Kendall tau distance to produce a defensible consensus score.
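The aggregation step can be sketched concretely. Kendall tau distance counts the pairs two rankings order differently; one simple consensus rule (an assumption here, not necessarily the project's exact scoring) picks the reviewer ranking that minimises total distance to all the others:

```rust
// Illustrative rank aggregation; the project's parsing and scoring details
// are not reproduced here.

// Kendall tau distance: the number of discordant pairs between two rankings,
// each given as an ordering of response indices (best first).
fn kendall_tau_distance(a: &[usize], b: &[usize]) -> usize {
    let n = a.len();
    // Position of each item within ranking b.
    let mut pos = vec![0; n];
    for (rank, &item) in b.iter().enumerate() {
        pos[item] = rank;
    }
    let mut discordant = 0;
    for i in 0..n {
        for j in (i + 1)..n {
            // a places a[i] before a[j]; discordant if b reverses that order.
            if pos[a[i]] > pos[a[j]] {
                discordant += 1;
            }
        }
    }
    discordant
}

// One simple consensus rule: choose the candidate ranking with the smallest
// total distance to every reviewer's ranking (candidates drawn from the
// reviewers' own orderings).
fn consensus(rankings: &[Vec<usize>]) -> &[usize] {
    rankings
        .iter()
        .min_by_key(|cand| {
            rankings.iter().map(|r| kendall_tau_distance(cand, r)).sum::<usize>()
        })
        .map(|v| v.as_slice())
        .unwrap()
}
```

Two identical rankings have distance 0; two fully reversed rankings of n items have distance n(n-1)/2, which gives the consensus score a natural normalisation range.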

Context budgeting

The ingestion pipeline extracts text from PDF, PPTX, and DOCX files, then enforces per-file and total-context ceilings before broadcasting material to the council. That keeps runs inside model limits without failing unpredictably.
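The budgeting rule itself is small. A sketch with invented limits, truncating by bytes for brevity (real code extracting from PDF/PPTX/DOCX would need to respect character and token boundaries):

```rust
// Hypothetical context-budgeting pass; limits and truncation strategy are
// assumptions for illustration.

// Trim each file to the per-file ceiling, then stop adding files once the
// total budget is spent — degrading gracefully instead of failing mid-run.
fn budget(files: Vec<String>, per_file: usize, total: usize) -> Vec<String> {
    let mut used = 0;
    let mut out = Vec::new();
    for f in files {
        // Byte-based truncation for brevity; real code must not split
        // multi-byte characters.
        let take = f.len().min(per_file).min(total - used);
        if take == 0 {
            break;
        }
        out.push(f[..take].to_string());
        used += take;
    }
    out
}
```

Applying the ceilings before broadcast means every council member sees the same trimmed material, so no single model's context limit decides the run's fate.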

Reruns without wasted cost

Users can re-run the synthesis stages, exclude weak performers, swap the chairman model, and inspect side-by-side diffs without repeating the expensive first-pass generation stage or re-uploading source files.
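The cost-saving mechanism is that first-pass answers are treated as an immutable artifact and only the later stages are recomputed. A sketch under that assumption, with invented names:

```rust
use std::collections::HashSet;

// Illustrative rerun model: the expensive generation stage ran once and its
// output is cached in `FirstPass`; reruns only re-execute synthesis.

#[derive(Clone)]
struct FirstPass {
    answers: Vec<(String, String)>, // (model name, answer text), generated once
}

// Re-synthesise with a new configuration: drop excluded models and hand the
// surviving answers to a (possibly different) chairman. No generation calls
// are made, so the first-pass cost is never repeated.
fn rerun_synthesis(
    pass: &FirstPass,
    excluded: &HashSet<String>,
    chairman: impl Fn(&[&str]) -> String,
) -> String {
    let kept: Vec<&str> = pass
        .answers
        .iter()
        .filter(|(model, _)| !excluded.contains(model))
        .map(|(_, text)| text.as_str())
        .collect();
    chairman(&kept)
}
```

Because the first pass is immutable, side-by-side diffs between reruns are just comparisons of synthesis outputs over the same cached inputs.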