Research reveals a plateau in AI trust; identifies autonomous systems as the primary security hurdle for 2027
Lineaje, the autonomous, full-lifecycle software supply chain security management company, released findings from its third annual on-site survey conducted at RSA Conference (RSAC) 2026. Results reveal a critical disconnect between enterprise AI adoption and security control. While organizations are racing to deploy AI-generated code at scale, the research highlights a lack of the visibility and governance needed to manage these autonomous systems securely.
Among the 100 cybersecurity attendees surveyed, 86% report they have already integrated AI-generated code into their workflows. However, the data uncovers a dangerous disparity: while 89% of respondents believe they can secure this code, a mere 17% possess full visibility into it. This “blind confidence” is compounded by the fact that more than half (51%) admit adoption is outstripping their ability to maintain oversight.
“Confidence without visibility is a false sense of security. The findings reveal that while enterprises are racing to embrace AI-driven speed, they are doing so with a significant blind spot,” said Javed Hasan, CEO and co-founder of Lineaje. “To bridge this ‘confidence gap,’ organizations need more than manual oversight; they need an autonomous policy orchestrator that provides a complete AI Bill of Materials. Only by embedding governance directly into the development workflow can enterprises ensure their agentic AI applications are secure-by-design.”
Governance Emerges as the Top AI Security Challenge for 2027
Security leaders cited AI governance as their top challenge for 2027, followed closely by the rise of agentic AI and autonomous systems. However, the path to secure adoption is currently blocked by fragmented oversight: 45% of respondents report only a partial line of sight into their code, while 35% admit to having virtually no transparency at all. Without a centralized grasp of these AI-generated assets, organizations are struggling to enforce governance and identify hidden exposures across the enterprise.
Trust in AI Hits a Plateau
Seven out of ten respondents report that their trust in AI has not increased since RSAC 2025 – including 21% who say that their trust has actually declined. This data signals a definitive shift out of the early hype cycle and into a pragmatic, risk-aware phase where organizations demand more than just performance; they demand accountability.
A Three-Year Evolution: From Foundations to Governance
This year’s findings reflect a sophisticated shift in the market captured across three years of Lineaje’s research at RSAC. In 2024, the industry struggled with foundational gaps, as 84% of organizations had yet to implement a Software Bill of Materials (SBOM). By 2025, the focus pivoted to transparency, with 88% of leaders looking to AI as the “silver bullet” for supply chain visibility. In 2026, that optimism has met reality. The challenge has moved decisively beyond managing software component risk; enterprises are now tasked with the far more complex mandate of governing AI-generated code and autonomous systems at scale.
The Mandate for Unified AI Governance
The research highlights a near-universal consensus on the path forward: 90% of respondents identified a unified platform for governance, security, and policy compliance as essential. As organizations grapple with the dual challenges of AI-generated code and agentic AI applications, the demand for a single “control plane” has moved from a strategic preference to a critical operational requirement.
Closing the Governance Gap with Lineaje UnifAI
To meet this market mandate for unified oversight, Lineaje recently launched UnifAI, the industry’s first autonomous AI policy orchestrator. By providing a centralized control plane, UnifAI enables enterprises to discover their entire AI ecosystem, map the AI Bill of Materials, and enforce real-time security guardrails. This ensures that as organizations scale their agentic AI applications, they do so with the visibility and governance that Lineaje’s research proves is currently missing.