This article argues that as GenAI moves into regulated, high-stakes environments, accuracy alone is no longer sufficient. Systems must produce defensible outputs: outputs that can be traced to specific data sources, generated within defined constraints, routed for human review when necessary, and reconstructed from audit logs. It introduces four engineering pillars (provenance, constraints, review, and retention) as the foundation for building audit-ready AI systems that meet compliance and trust requirements.
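
To make the four pillars concrete, here is a minimal sketch (not the article's implementation; all names such as `AuditRecord` are hypothetical) of an audit-log entry that binds a generated output to its provenance, constraints, review status, and a retention-friendly fingerprint:

```python
import hashlib
import json
from dataclasses import dataclass, field, asdict
from datetime import datetime, timezone

@dataclass
class AuditRecord:
    """Hypothetical audit-log entry tying one output to the four pillars."""
    prompt: str
    output: str
    sources: list         # provenance: data sources the output can be traced to
    constraints: list     # constraints: rules enforced at generation time
    needs_review: bool    # review: flagged for human sign-off when necessary
    timestamp: str = field(
        default_factory=lambda: datetime.now(timezone.utc).isoformat()
    )

    def fingerprint(self) -> str:
        # retention: a stable hash of the full record, so the stored
        # log entry can be verified and the output reconstructed later
        payload = json.dumps(asdict(self), sort_keys=True).encode()
        return hashlib.sha256(payload).hexdigest()

# Example: wrap one generation in an auditable record
record = AuditRecord(
    prompt="Summarize policy X",
    output="Policy X requires ...",
    sources=["doc://policies/x#section-2"],   # illustrative source URI
    constraints=["no PII", "cite sources"],
    needs_review=True,
)
log_entry = {"record": asdict(record), "sha256": record.fingerprint()}
```

The design choice worth noting is that the fingerprint covers the entire record, so any later tampering with the prompt, sources, or output invalidates the stored hash.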