AletheionGuard API v1.1.0

Epistemic auditor for LLM outputs with Managed and BYO-HF modes

OAS 3.1 | /openapi.json

Base URL

https://aletheionguard.onrender.com

All API requests are made against this base URL. Interactive documentation is available at https://aletheionguard.onrender.com/docs.

Core Endpoints

GET /

Root endpoint.

GET /health

Health check endpoint. No authentication required.

Response (200 OK)

{
  "status": "string",
  "version": "string",
  "uptime_seconds": 0
}
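As a client-side sketch using only the Python standard library (the helper names and the exact `status` value such as "ok" are assumptions, not specified above), the health probe can be called and summarized like this:

```python
import json
import urllib.request

BASE_URL = "https://aletheionguard.onrender.com"

def check_health(timeout: float = 10.0) -> dict:
    """GET /health -- no authentication header is required."""
    with urllib.request.urlopen(f"{BASE_URL}/health", timeout=timeout) as resp:
        return json.load(resp)

def summarize_health(payload: dict) -> str:
    """One-line summary of the documented status/version/uptime_seconds fields."""
    return (f"{payload['status']} "
            f"(v{payload['version']}, up {payload['uptime_seconds']}s)")
```

For example, `summarize_health({"status": "ok", "version": "1.1.0", "uptime_seconds": 42})` yields "ok (v1.1.0, up 42s)".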
POST /v1/audit

Audit a single LLM response. Supports both Managed and BYO-HF modes:

  • Managed mode: the server uses the default HF_ENDPOINT_URL and HF_TOKEN from its environment
  • BYO-HF mode: the server uses the X-HF-Token and X-HF-Endpoint headers supplied by the client

Headers (Optional - BYO-HF mode)

X-HF-Token: Your Hugging Face token
X-HF-Endpoint: Your Hugging Face endpoint URL

Request Body

{
  "text": "string",
  "context": "string",
  "model_source": "string"
}
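A minimal request-builder sketch (standard library only; the helper name and the tuple return shape are illustrative, not part of the API) shows that the two modes differ only in headers:

```python
import json

BASE_URL = "https://aletheionguard.onrender.com"

def build_audit_request(text, context=None, model_source=None,
                        hf_token=None, hf_endpoint=None):
    """Return (url, headers, body_bytes) for POST /v1/audit.

    With hf_token and hf_endpoint set, the X-HF-* headers put the server in
    BYO-HF mode; without them the server stays in Managed mode and falls back
    to its own HF_ENDPOINT_URL / HF_TOKEN.
    """
    body = {"text": text}
    if context is not None:
        body["context"] = context
    if model_source is not None:
        body["model_source"] = model_source
    headers = {"Content-Type": "application/json"}
    if hf_token and hf_endpoint:
        headers["X-HF-Token"] = hf_token
        headers["X-HF-Endpoint"] = hf_endpoint
    return f"{BASE_URL}/v1/audit", headers, json.dumps(body).encode()
```

The tuple can then be sent with `urllib.request.Request(url, data=body, headers=headers, method="POST")` or any other HTTP client.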

Response (200 OK)

{
  "q1": 0,
  "q2": 0,
  "height": 0,
  "ece": 0,
  "brier": 0,
  "verdict": "string",
  "confidence_interval": [0],
  "explanation": "string",
  "metadata": {
    "additionalProp1": {}
  },
  "mode": "string",
  "upstream_latency_ms": 0
}

Response (422 Validation Error)

{
  "detail": [
    {
      "loc": ["string", 0],
      "msg": "string",
      "type": "string"
    }
  ]
}

Raises: HTTPException 400 if the HF endpoint is invalid, 502 if the HF upstream fails, 500 if the audit itself fails.

POST /v1/batch

Audit multiple responses in batch.

Request Body

{
  "items": [
    {
      "text": "string",
      "context": "string",
      "model_source": "string"
    }
  ]
}
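Since each batch item mirrors the /v1/audit request body, the batch body can be sketched as follows (the helper name is assumed; it drops unknown keys and unset values so the body matches the documented schema):

```python
def build_batch_body(items):
    """Wrap /v1/audit-style items for POST /v1/batch."""
    allowed = ("text", "context", "model_source")
    return {"items": [
        {k: item[k] for k in allowed if item.get(k) is not None}
        for item in items
    ]}
```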

Response (200 OK)

{
  "audits": [
    {
      "q1": 0,
      "q2": 0,
      "height": 0,
      "ece": 0,
      "brier": 0,
      "verdict": "string",
      "confidence_interval": [0],
      "explanation": "string",
      "metadata": {
        "additionalProp1": {}
      },
      "mode": "string",
      "upstream_latency_ms": 0
    }
  ],
  "summary": {
    "additionalProp1": {}
  }
}

Response (422 Validation Error)

{
  "detail": [
    {
      "loc": ["string", 0],
      "msg": "string",
      "type": "string"
    }
  ]
}

POST /v1/compare

Compare calibration quality across multiple model outputs. Models are ranked by epistemic uncertainty (Q2) and calibration metrics; a lower Q2 indicates a more confident, better-calibrated prediction.

Request Body

{
  "prompt": "string",
  "responses": [
    {
      "additionalProp1": {}
    },
    {
      "additionalProp1": {}
    }
  ]
}

Response (200 OK)

{
  "prompt": "string",
  "comparisons": [
    {
      "additionalProp1": {}
    }
  ],
  "ranking": [
    {
      "additionalProp1": {}
    }
  ],
  "best_model": "string",
  "summary": {
    "additionalProp1": {}
  }
}
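The documented ranking rule (lower Q2 first) can be mirrored client-side; as a sketch, assuming each per-response audit entry carries `model` and `q2` keys (the key names are assumptions, not specified above):

```python
def rank_by_q2(audits):
    """Sort audit entries so the lowest epistemic uncertainty (Q2) --
    i.e. the most confident, best-calibrated output -- comes first."""
    return sorted(audits, key=lambda a: a["q2"])
```

With that shape, `rank_by_q2(audits)[0]` corresponds to `best_model`.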

Response (422 Validation Error)

{
  "detail": [
    {
      "loc": ["string", 0],
      "msg": "string",
      "type": "string"
    }
  ]
}

Raises: HTTPException 400 if the input is invalid, 500 if the comparison fails.

POST /v1/calibrate

Perform an audit with optional online-calibration feedback. The request may include ground truth or human feedback for online learning; future versions will use this feedback to improve calibration.

Request Body

{
  "text": "string",
  "context": "string",
  "ground_truth": 1,
  "feedback": "string"
}
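A payload-builder sketch (the helper name is assumed, as is the 1/0 encoding of `ground_truth`; only `text` appears to be required):

```python
def build_calibrate_body(text, context=None, ground_truth=None, feedback=None):
    """Body for POST /v1/calibrate; ground_truth and feedback are the
    optional online-learning signals described above."""
    body = {"text": text}
    for key, value in (("context", context),
                       ("ground_truth", ground_truth),
                       ("feedback", feedback)):
        if value is not None:
            body[key] = value
    return body
```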

Response (200 OK)

{
  "q1": 0,
  "q2": 0,
  "height": 0,
  "verdict": "string",
  "calibration_adjustment": 0,
  "feedback_recorded": false
}

Response (422 Validation Error)

{
  "detail": [
    {
      "loc": ["string", 0],
      "msg": "string",
      "type": "string"
    }
  ]
}

Raises: HTTPException 500 if calibration fails.

Need Help?

Explore the interactive documentation at https://aletheionguard.onrender.com/docs or get in touch.