DAXA, the data-first AI security and governance company purpose-built for agentic AI, today announced its acceptance into the NVIDIA Inception program, a milestone that validates the company’s approach to securing the agentic AI stack at enterprise scale.
As enterprises move agentic AI from pilot to production, the security gap is becoming harder to ignore. Agents act autonomously, touch live data, and make decisions at machine speed. Traditional security architectures were designed for human-operated software systems, not autonomous AI agents capable of reasoning, retrieving sensitive data, and taking actions across enterprise infrastructure.
DAXA’s platform provides runtime data governance that travels with the agent, ensuring sensitive data stays protected as AI systems act. The NVIDIA Inception program brings DAXA into the ecosystem where the enterprise agentic AI stack is being built, from NeMo-powered reasoning systems to emerging runtime isolation and agent safety infrastructure.
While emerging NVIDIA technologies help provide runtime isolation and containment for autonomous agents, DAXA extends governance across the inference and data plane, ensuring agents not only run safely, but also access the right enterprise data, follow enterprise policy, and remain auditable end-to-end.
“Enterprises are rapidly deploying AI agents that operate autonomously across sensitive data and critical systems,” said Huseni Saboowala, Co-founder and CEO of DAXA. “The governance layer cannot remain fragmented across disconnected infrastructure, inference, and data security controls. It has to be holistic, embedded at runtime, and travel with the agent itself. We believe runtime governance will become foundational infrastructure for enterprise AI, and we are excited to help build that future within the NVIDIA ecosystem.”