
Cockroach Connect Supper Club: AI Is Not a Feature. It’s a Systems Problem.

Published on April 22, 2026


    Last week in San Francisco, Cockroach Labs hosted a private dinner: Cockroach Connect: The Enterprise Reckoning.

    Held at Mosa Supper Club, the evening brought together platform leaders, founders, and operators from Walmart, PayPal, Salesforce, Roblox, Ford, Geico, UPS, Citi, Together AI, and Ory, alongside investors like FirstMark. The discussion was led by Peter Mattis (CTO & Co-founder, Cockroach Labs), Jeff Hickman (Head of Customer Engineering, Ory), and Aman Kabeer (Principal, FirstMark). It quickly turned into something candid and unfiltered, grounded in what is actually happening in production rather than what shows up in demos.

    There were no slides or polished takes. Just a room full of people already dealing with the complexity of putting AI to work, and a shared recognition that something fundamental has shifted. AI is no longer experimental. It is operational. And the infrastructure underneath it was not built for what is coming next.


    (l-r) Aman Kabeer, Principal, FirstMark; Jeff Hickman, Head of Customer Engineering, Ory; Peter Mattis, CTO & Co-founder, Cockroach Labs; and Igor Stanko, VP, Product, Cockroach Labs led a lively discussion at Cockroach Connect.

    A conversation over dinner

    The discussion unfolded over a three-course meal of whipped fromage in cucumber cups, a citrus salad, and koji-marinated filet mignon. What started as excitement quickly moved into something more honest: what actually breaks when AI moves into production.

    Early in the conversation, it was clear teams are seeing tangible gains. Work that used to take days, like competitive analysis or internal research, now takes minutes. That shift isn’t just about efficiency. It’s changing how teams scope problems, what they choose to work on, and how quickly they move.

    But that optimism did not last long without challenge. The conversation quickly shifted to a more grounded reality. What works in a demo environment often falls apart when it meets real systems, real data, and real constraints.


    Connect cuisine

    AI is moving from experimentation to production

    AI is no longer confined to engineering teams. One of the most striking shifts discussed was how quickly non-technical teams are adopting it. Legal, finance, and HR are now building tools and workflows that previously required engineering support. In some cases, organizations are seeing an explosion of internal applications driven by people who were never “builders” before.

    That bottom-up momentum is powerful, but it also introduces chaos. There is no longer a single team controlling how AI is introduced or used. Instead, adoption is happening everywhere at once.

    At the same time, many organizations are still hesitating. They are waiting for better standards, more mature tooling, or a clearer “right way” to adopt AI. The consensus in the room was that this instinct is understandable, but risky. Teams that are building today are not just shipping faster. They are learning faster, and that learning compounds.

    From models to systems: what actually changes

    One of the clearest misconceptions the group dismantled was that production AI is primarily a model problem. In practice, it is not. The real shift is from models to systems. In experimentation, teams focus on prompts, benchmarks, and model performance. In production, those concerns become secondary to integration, orchestration, and data access.

    AI does not operate in isolation. It interacts with existing enterprise systems, pulls from multiple data sources, and triggers real-world actions. This creates pressure on infrastructure that was never designed for machine-driven execution at this scale. As one participant put it, AI is not something you “add” to a system. It forces you to rethink the system itself.

    Data is the reasoning engine

    Another theme that came up repeatedly was that the model is rarely the limiting factor. Context is.

    The effectiveness of an AI system depends on whether it has access to the right data, whether that data is consistent, and whether it is being applied at the right time.

    This is where many systems begin to break down. Data across enterprises is often fragmented, inconsistent, and difficult to access in real time. When agents operate in an incomplete or stale context, they do not fail gracefully. They act on it.

    That is why context engineering is becoming critical. It is not just about feeding more data into a model. It is about structuring, constraining, and governing that data so the system can reason correctly.
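    The panel did not prescribe an implementation, but the idea of structuring and constraining context rather than "feeding more data" can be sketched minimally. Everything below is illustrative: the record shape, freshness window, and size cap are invented for the example, not taken from any specific framework.

```python
from dataclasses import dataclass
from datetime import datetime, timedelta, timezone

# Hypothetical record shape; a real system would pull from its own stores.
@dataclass
class ContextRecord:
    source: str          # attribution: where this data came from
    fetched_at: datetime # when it was retrieved, for staleness checks
    payload: dict

MAX_AGE = timedelta(minutes=5)  # reject stale data instead of reasoning on it
MAX_RECORDS = 20                # bound context size instead of "feeding more"

def build_context(records: list[ContextRecord]) -> list[dict]:
    """Keep only fresh, attributed records, newest first, bounded in size."""
    now = datetime.now(timezone.utc)
    fresh = [r for r in records if now - r.fetched_at <= MAX_AGE]
    fresh.sort(key=lambda r: r.fetched_at, reverse=True)
    return [{"source": r.source, "data": r.payload} for r in fresh[:MAX_RECORDS]]
```

    The point of the sketch is the constraints: stale records are dropped rather than passed along, every item carries its source, and the context is bounded so the system reasons over governed data rather than whatever happens to be reachable.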

    Coordination becomes the challenge at scale

    As the conversation deepened, coordination emerged as one of the hardest problems to solve.

    Agents aren't operating alone. They're interacting across APIs, services, and business units simultaneously, creating failure modes most architectures weren't designed to handle. A small data inconsistency in one system can trigger incorrect actions in another, and at machine speed, those errors cascade before anyone notices.

    Systems designed for human-paced interaction — where a person reviews, approves, retries — struggle under constant automated execution. The challenge isn't just throughput. It's maintaining consistency across a sprawling environment where agents are making decisions continuously.


    Dinner + data: A grand convergence at the Cockroach Connect Supper Club in San Francisco.

    The "lethal trifecta" of AI security

    Security was one of the most urgent topics of the night, and for good reason.

    AI agents combine three capabilities that were historically separate. They can access sensitive data, take action within systems, and respond to external input. Together, this creates what one participant described as a “lethal trifecta.”

    This is not a theoretical risk. It is already happening in subtle ways. Agents are being granted broad access to systems in order to be useful, but that access often exceeds what would be considered acceptable in a human context. At the same time, restricting access too tightly limits the value of the system.

    What makes this especially challenging is that traditional governance models are reactive. They rely on logs, monitoring, and after-the-fact analysis. That approach does not hold up when decisions are being made and executed continuously.

    The shift is toward active governance. Policies need to be enforced at the point of execution. Every action needs identity, attribution, and traceability. And increasingly, those controls need to live at the data layer, not just at the application or model layer.
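    As a rough illustration only (the policy table, names, and log shape here are invented, not a recommendation from the panel or Cockroach Labs), enforcing policy at the point of execution, with identity and attribution on every attempt, might look like:

```python
import uuid
from datetime import datetime, timezone

# Hypothetical policy table: which agent identities may perform which actions.
POLICIES = {
    ("billing-agent", "read_invoice"): True,
    ("billing-agent", "issue_refund"): False,
}

AUDIT_LOG: list[dict] = []  # every attempt is recorded, allowed or not

def execute(agent_id: str, action: str, fn, *args):
    """Check policy before running the action; attribute and trace the attempt."""
    allowed = POLICIES.get((agent_id, action), False)  # deny by default
    AUDIT_LOG.append({
        "id": str(uuid.uuid4()),      # traceability
        "agent": agent_id,            # identity
        "action": action,             # attribution
        "allowed": allowed,
        "at": datetime.now(timezone.utc).isoformat(),
    })
    if not allowed:
        raise PermissionError(f"{agent_id} may not {action}")
    return fn(*args)
```

    The contrast with reactive governance is that the check happens before the action runs, not in an after-the-fact log review; the audit entry exists even for denied attempts. In a production system these controls would live in shared infrastructure, increasingly at the data layer, rather than in each application.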

    Speed vs. safety is the wrong framing

    There was a clear tension throughout the conversation around how quickly to move.

    On one hand, the risks are real. Security, governance, and system reliability are all harder in an AI-driven environment. On the other hand, the gains are too significant to ignore. AI is not delivering marginal improvements. It is delivering step-function changes in productivity.

    Framing this as a choice between speed and safety misses the point. The real tradeoff is between inaction and resilience. The companies that move forward successfully are not the ones that eliminate risk entirely. They are the ones that build systems capable of detecting issues, containing failures, and recovering quickly. The fundamentals of good engineering still apply. If anything, they become more important.

    The shift to industrial AI infrastructure

    By the end of the night, the conversation had moved past tools and frameworks into a more fundamental question: what does infrastructure need to look like when AI isn't an experiment, but a production system running continuously, at scale, across every part of the business?

    The answer kept coming back to resilience — not as a feature, but as an architectural property. Systems that survive failures, maintain consistency across regions, and hold up under sustained machine-driven pressure.

    It's the kind of environment CockroachDB was designed for: distributed, strongly consistent, resilient by default. Continuous execution, global coordination, zero tolerance for downtime or data inconsistency — these aren't aspirational goals. They're architectural requirements.

    The defining question isn't what AI can do anymore. It's whether your infrastructure can keep up.

    See how CockroachDB enables resilient, distributed systems for AI in production. Speak with an expert.

