Basic Examples

Get started quickly with these basic examples, which cover common use cases for AletheionGuard's epistemic uncertainty detection.

Single Text Audit

The simplest way to audit a single piece of text for epistemic uncertainty. Each audit returns aleatoric uncertainty (Q1), epistemic uncertainty (Q2), an overall confidence height, and a verdict.

Python
from aletheion_guard import EpistemicAuditor

# Initialize auditor
auditor = EpistemicAuditor()

# Audit a statement
result = auditor.evaluate(
    text="Paris is the capital of France"
)

# Check results
print(f"Q1 (Aleatoric): {result.q1:.3f}")
print(f"Q2 (Epistemic): {result.q2:.3f}")
print(f"Height: {result.height:.3f}")
print(f"Verdict: {result.verdict}")

Output:

Q1 (Aleatoric): 0.082
Q2 (Epistemic): 0.045
Height: 0.906
Verdict: ACCEPT

Using the REST API

Make a simple HTTP request to audit text using the REST API.

cURL

curl -X POST https://aletheionguard.onrender.com/v1/audit \
  -H "X-API-Key: your_api_key" \
  -H "Content-Type: application/json" \
  -d '{
    "text": "The Earth orbits the Sun"
  }'

Python (requests)

import requests

url = "https://aletheionguard.onrender.com/v1/audit"
headers = {
    "X-API-Key": "your_api_key",
    "Content-Type": "application/json"
}
data = {"text": "The Earth orbits the Sun"}

response = requests.post(url, headers=headers, json=data)
response.raise_for_status()  # surface HTTP errors instead of parsing an error body
result = response.json()
print(result)

JavaScript/Node.js

const response = await fetch(
  "https://aletheionguard.onrender.com/v1/audit",
  {
    method: "POST",
    headers: {
      "X-API-Key": "your_api_key",
      "Content-Type": "application/json"
    },
    body: JSON.stringify({
      text: "The Earth orbits the Sun"
    })
  }
);
const result = await response.json();
console.log(result);

Auditing with Context

Provide additional context to improve audit accuracy; this is especially useful for domain-specific content.

from aletheion_guard import EpistemicAuditor

auditor = EpistemicAuditor()

# Without context - might have higher uncertainty
result1 = auditor.evaluate(
    text="The half-life is approximately 5,730 years"
)

# With context - a more precise audit
result2 = auditor.evaluate(
    text="The half-life is approximately 5,730 years",
    context="Carbon-14 dating in archaeology"
)

print(f"Without context - Q2: {result1.q2:.3f}, Verdict: {result1.verdict}")
print(f"With context - Q2: {result2.q2:.3f}, Verdict: {result2.verdict}")

Output:

Without context - Q2: 0.387, Verdict: MAYBE
With context - Q2: 0.112, Verdict: ACCEPT

Batch Processing

Process multiple texts efficiently in a single request to save on API calls and improve throughput.

from aletheion_guard import EpistemicAuditor

auditor = EpistemicAuditor()

# Multiple texts to audit
texts = [
    "Water boils at 100°C at sea level",
    "The moon is made of cheese",
    "Bitcoin will reach $1 million tomorrow",
    "Python is a programming language"
]

# Batch evaluate
results = auditor.batch_evaluate(texts, batch_size=32)

# Print results
for text, result in zip(texts, results):
    print(f"Text: {text[:40]}...")
    print(f"  Q1: {result.q1:.3f}, Q2: {result.q2:.3f}")
    print(f"  Verdict: {result.verdict}\n")

Output:

Text: Water boils at 100°C at sea level...
  Q1: 0.094, Q2: 0.067
  Verdict: ACCEPT

Text: The moon is made of cheese...
  Q1: 0.512, Q2: 0.089
  Verdict: MAYBE

Text: Bitcoin will reach $1 million tomorrow...
  Q1: 0.234, Q2: 0.678
  Verdict: REFUSED

Text: Python is a programming language...
  Q1: 0.043, Q2: 0.021
  Verdict: ACCEPT
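
Batch results line up with the input order (as the zip above relies on), so they can be routed downstream in one pass. A minimal sketch reusing the batch_evaluate call from above to bucket texts by verdict:

from collections import defaultdict

from aletheion_guard import EpistemicAuditor

auditor = EpistemicAuditor()

texts = [
    "Water boils at 100°C at sea level",
    "The moon is made of cheese",
    "Bitcoin will reach $1 million tomorrow",
    "Python is a programming language"
]

# Group texts by verdict so each bucket can be handled differently
buckets = defaultdict(list)
for text, result in zip(texts, auditor.batch_evaluate(texts, batch_size=32)):
    buckets[result.verdict].append(text)

accepted = buckets["ACCEPT"]      # safe to pass through
needs_review = buckets["MAYBE"]   # route to manual review
rejected = buckets["REFUSED"]     # drop or escalate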

Custom Thresholds

Adjust Q1 and Q2 thresholds based on your risk tolerance and use case requirements.

from aletheion_guard import EpistemicAuditor

# Conservative thresholds (stricter)
conservative_auditor = EpistemicAuditor(config={
    "q1_threshold": 0.25,  # Lower = stricter
    "q2_threshold": 0.20   # Lower = stricter
})

# Permissive thresholds (more lenient)
permissive_auditor = EpistemicAuditor(config={
    "q1_threshold": 0.50,  # Higher = more lenient
    "q2_threshold": 0.45   # Higher = more lenient
})

text = "The stock market will go up next week"

result1 = conservative_auditor.evaluate(text)
result2 = permissive_auditor.evaluate(text)

print(f"Conservative: {result1.verdict}")
print(f"Permissive: {result2.verdict}")

Use Case Guide (see the preset sketch after this list):

  • Healthcare/Legal: Conservative thresholds (0.20-0.25)
  • General Q&A: Balanced thresholds (0.30-0.35)
  • Creative/Brainstorming: Permissive thresholds (0.45-0.50)
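
These recommendations can be captured as small config presets. A minimal sketch using the q1_threshold/q2_threshold config keys shown above; the preset names and the mid-range values are illustrative, not part of the library:

from aletheion_guard import EpistemicAuditor

# Illustrative presets based on the guide above (names are not library API)
THRESHOLD_PRESETS = {
    "healthcare_legal": {"q1_threshold": 0.25, "q2_threshold": 0.20},
    "general_qa":       {"q1_threshold": 0.35, "q2_threshold": 0.30},
    "creative":         {"q1_threshold": 0.50, "q2_threshold": 0.45},
}

def make_auditor(use_case: str) -> EpistemicAuditor:
    """Build an auditor configured for the given risk profile."""
    return EpistemicAuditor(config=THRESHOLD_PRESETS[use_case])

auditor = make_auditor("general_qa")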

Error Handling

Properly handle errors and edge cases in your integration.

from aletheion_guard import EpistemicAuditor
from aletheion_guard.exceptions import (
    ValidationError,
    ModelLoadError,
    RateLimitError
)

try:
    auditor = EpistemicAuditor()
    result = auditor.evaluate(
        text="Paris is the capital of France"
    )

    if result.verdict == "ACCEPT":
        print("High confidence - proceed")
    elif result.verdict == "MAYBE":
        print("Moderate confidence - verify")
    else:  # REFUSED
        print("Low confidence - reject or escalate")

except ValidationError as e:
    print(f"Invalid input: {e}")
except ModelLoadError as e:
    print(f"Failed to load model: {e}")
except RateLimitError as e:
    print(f"Rate limit exceeded: {e}")
except Exception as e:
    print(f"Unexpected error: {e}")

Simple Content Filter

Use AletheionGuard as a simple filter to accept, flag, or reject content based on epistemic uncertainty.

from aletheion_guard import EpistemicAuditor

# Create the auditor once and reuse it across calls
auditor = EpistemicAuditor()

def filter_content(text: str) -> dict:
    """Filter content based on epistemic uncertainty."""
    result = auditor.evaluate(text)
    return {
        "text": text,
        "verdict": result.verdict,
        "confidence": result.height,
        "action": {
            "ACCEPT": "publish",
            "MAYBE": "review",
            "REFUSED": "reject"
        }.get(result.verdict),
        "q1": result.q1,
        "q2": result.q2
    }

# Example usage
contents = [
    "2 + 2 = 4",
    "It might rain tomorrow",
    "I will win the lottery next week"
]

for content in contents:
    result = filter_content(content)
    print(f"{result['action'].upper()}: {content}")

Output:

PUBLISH: 2 + 2 = 4
REVIEW: It might rain tomorrow
REJECT: I will win the lottery next week

Real-World Use Cases

Customer Support Bot

Determine when to escalate to human agents based on response uncertainty.

# Assumes auditor, bot_response, user_question, and the helper
# functions are defined elsewhere in your application
result = auditor.evaluate(
    text=bot_response,
    context=user_question
)

if result.verdict == "REFUSED":
    escalate_to_human()
else:
    send_response(bot_response)

Content Moderation

Flag uncertain or potentially false content for manual review.

result = auditor.evaluate(text=user_post)

# High epistemic uncertainty (Q2) marks content that may be unverifiable
if result.q2 > 0.4:
    flag_for_review(user_post)
elif result.verdict == "ACCEPT":
    publish_immediately(user_post)

RAG Answer Quality

Audit RAG system responses before returning them to users.

answer = rag_system.query(question)

result = auditor.evaluate(
    text=answer,
    context=question
)

# Treat low height (overall confidence) as a signal to hedge
if result.height < 0.6:
    return "I'm not confident..."
else:
    return answer

Automated Fact Checking

Identify and prioritize claims that need fact-checking.

for claim in article_claims:
    result = auditor.evaluate(claim)
    if result.verdict != "ACCEPT":
        # Higher epistemic uncertainty = higher fact-check priority
        fact_check_queue.add({
            "claim": claim,
            "priority": result.q2
        })
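
Queued claims can then be reviewed most-uncertain first. A minimal, self-contained sketch of the same pattern using a plain list in place of fact_check_queue; the example claims are illustrative:

from aletheion_guard import EpistemicAuditor

auditor = EpistemicAuditor()

article_claims = [
    "The Eiffel Tower is about 330 meters tall",
    "This supplement cures every known disease"
]

# Collect non-ACCEPT claims with epistemic uncertainty (Q2) as priority
queue = []
for claim in article_claims:
    result = auditor.evaluate(claim)
    if result.verdict != "ACCEPT":
        queue.append({"claim": claim, "priority": result.q2})

# Review the most uncertain claims first
for item in sorted(queue, key=lambda x: x["priority"], reverse=True):
    print(f"[{item['priority']:.3f}] {item['claim']}")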

Next Steps