21CQ2
Research & Development

Studying how language models fail — and why it matters.

21CQ2 is an independent research practice focused on the reliability and interpretability of large language models, with particular attention to high-stakes deployment contexts in legal, compliance, and professional domains.

Current work examines how transformer attention mechanisms process — and systematically misprocess — negation, producing confident, fluent, and factually incorrect outputs in settings where correctness is not optional.
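Why negation is a hard case can be seen even without a transformer. A minimal sketch (a toy illustration, not 21CQ2's methodology): under a simple bag-of-words representation, a statement and its negation are nearly identical, so any system leaning on surface-level token overlap has almost no signal to distinguish them.

```python
# Toy illustration: bag-of-words cosine similarity scores a statement and
# its negation as nearly identical, despite opposite meanings. This is an
# assumption-laden sketch of the failure mode, not the research method itself.
from collections import Counter
import math

def cosine(a: str, b: str) -> float:
    """Cosine similarity between bag-of-words vectors of two strings."""
    va, vb = Counter(a.lower().split()), Counter(b.lower().split())
    dot = sum(va[w] * vb[w] for w in va)
    norm = (math.sqrt(sum(c * c for c in va.values()))
            * math.sqrt(sum(c * c for c in vb.values())))
    return dot / norm if norm else 0.0

claim   = "the contract is enforceable under state law"
negated = "the contract is not enforceable under state law"
print(round(cosine(claim, negated), 3))  # ≈ 0.935: one token apart, opposite meaning
```

The single token "not" barely moves the score, which is exactly why negation handling has to be studied at the mechanism level rather than assumed from fluent output.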

01
Mechanistic Interpretability
Attention-level analysis of failure modes in open-weight transformer models
02
Hallucination Detection
Structured decomposition and verification pipelines for LLM outputs
03
High-Stakes Deployment
LLM reliability in legal, compliance, and professional contexts
04
Applied AI Systems
iOS and API-based tools for real-world fact verification
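The "structured decomposition and verification" idea in area 02 can be sketched in a few lines. This is a hypothetical illustration of the general pipeline shape only: the sentence splitter and the word-overlap verifier below are toy stand-ins, not 21CQ2's actual components.

```python
# Illustrative sketch of a decompose-and-verify pipeline: split an LLM output
# into atomic claims, check each against a trusted source, flag the rest.
# Both stages here are deliberately naive placeholders.
from dataclasses import dataclass

@dataclass
class Verdict:
    claim: str
    supported: bool

def decompose(output: str) -> list[str]:
    """Naive decomposition: treat each sentence as one atomic claim."""
    return [s.strip() for s in output.split(".") if s.strip()]

def verify(claim: str, source: str) -> bool:
    """Toy verifier: a claim counts as supported if every word appears in the source."""
    src = set(source.lower().split())
    return all(w in src for w in claim.lower().split())

def check_output(output: str, source: str) -> list[Verdict]:
    return [Verdict(c, verify(c, source)) for c in decompose(output)]

source = "the filing deadline is march 1 and late filings incur a penalty"
output = "The filing deadline is march 1. Late filings are waived."
for v in check_output(output, source):
    print(("SUPPORTED  " if v.supported else "UNSUPPORTED"), v.claim)
```

In a real pipeline, each stage would be replaced by something stronger (an LLM-based claim extractor, retrieval plus entailment checking), but the structure of decompose, verify per claim, then aggregate stays the same.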
Contact
Connor Mahon
connor@21cq2.com