On November 14, 2025, over one million developers woke up to discover that their primary coding tool had stopped working.

Not because of a bug. Not because of an outage. Not because of a security incident. Windsurf (formerly Codeium, a code editor used by over 1M developers) stopped working because Anthropic revoked its API access. A contract dispute between two companies. A business decision made in a boardroom. And overnight, the developers who depended on that tool had nothing.

No warning. No migration path. No data export. No way to recover their workflows, their customizations, their integrations. The tool was proprietary. The infrastructure was proprietary. The dependency was total. When the API key stopped working, everything stopped working.

This was not a black swan. It was the entirely predictable consequence of building critical infrastructure on someone else’s proprietary foundation. Windsurf built its product on a single API provider. When that provider pulled access, the product ceased to exist. Every developer who depended on Windsurf was collateral damage in a contract negotiation they had no visibility into and no influence over.

The Windsurf incident is not a story about Windsurf or Anthropic. It is a story about what happens when you build on infrastructure you cannot inspect, cannot fork, and cannot self-host. It is the story that every organization running proprietary AI tools should treat as a preview of their own future.

The Claim

Every proprietary AI tool you adopt creates a dependency you do not control. The vendor controls the pricing. The vendor controls the feature roadmap. The vendor controls whether the tool continues to exist. You control nothing except the decision to keep paying.

This is not a new observation. The enterprise software industry has been dealing with vendor lock-in for decades. But AI infrastructure raises the stakes dramatically, because AI tools are not commodities. They are operational nervous systems. When an AI agent manages your approval workflows, processes your customer communications, or coordinates your internal operations, losing that tool is not an inconvenience. It is an operational crisis.

The Windsurf incident crystallized this in a way no white paper could. One million developers did not lose access to a nice-to-have feature. They lost their primary working environment. And they lost it instantly, without recourse, because the infrastructure was proprietary end to end.

The open-source argument is not ideological. It is structural. Open source is the only trust model for AI infrastructure because it is the only model where trust is verifiable. You can read the code. You can audit the dependencies. You can run it on your own servers. You can fork it if the maintainer disappears. You can modify it if your needs diverge. None of these things are possible with proprietary infrastructure.

Trust without transparency is faith. And faith is not an infrastructure strategy.

The Evidence

NimbleBrain’s open-source stack

NimbleBrain open-sources everything we build, as a structural commitment to a specific model of trust: if you cannot inspect it, you cannot trust it. If you cannot fork it, you do not own it.

Upjack. Declarative AI application framework. Apps are defined as JSON schemas and natural language skills. Every line of the framework is published, readable, and auditable. There is no hidden runtime, no proprietary execution layer, no compiled binary that phones home. Clients who deploy Upjack applications own the full stack. They can read every function that processes their data. They can modify the framework to fit their domain. If NimbleBrain disappeared tomorrow, every Upjack application would continue running exactly as it does today.

That last point matters. The test of real infrastructure ownership is not “does it work while the vendor exists?” The test is “does it work if the vendor disappears?” Upjack passes that test. Windsurf did not.
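To make the declarative model concrete, here is a minimal sketch of what a "JSON schema plus natural-language skills" app definition could look like. The field names and structure below are illustrative assumptions for this article, not Upjack's actual format.

```python
import json

# Hypothetical app definition in the spirit of Upjack's declarative model:
# a JSON schema describing the data, plus natural-language skills.
# Field names here are assumptions, not Upjack's real format.
APP_DEFINITION = """
{
  "name": "expense-approval",
  "schema": {
    "type": "object",
    "required": ["amount", "requester"],
    "properties": {
      "amount": {"type": "number"},
      "requester": {"type": "string"}
    }
  },
  "skills": ["Route expenses over $500 to a manager for approval."]
}
"""

def load_app(raw: str) -> dict:
    """Parse an app definition and check the minimal structure we assume."""
    app = json.loads(raw)
    for key in ("name", "schema", "skills"):
        if key not in app:
            raise ValueError(f"missing required key: {key}")
    return app

app = load_app(APP_DEFINITION)
print(app["name"])  # expense-approval
```

Because the definition is plain JSON, any tool in any language can read, validate, or generate it; nothing about the format requires the original vendor's runtime.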

mpak. MCP bundle registry with security scanning. The registry itself is self-hostable. Enterprise clients can run their own internal registry behind their firewall, with their own security policies, disconnected from any external service. The search, discovery, installation, and verification pipeline works entirely on-premises if needed.

This is not a theoretical capability. Clients operating in regulated industries (defense, healthcare, financial services) require air-gapped deployments where no data leaves the network boundary. mpak’s architecture supports this because it was designed for this. A proprietary registry with a SaaS-only deployment model cannot serve these clients. Period.
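The core of an on-premises verification pipeline can be sketched in a few lines: compare a bundle's content hash against a locally stored index, with no network call anywhere. The index format and bundle names below are illustrative assumptions, not mpak's actual schema.

```python
import hashlib

# Hypothetical local registry index for an air-gapped deployment.
# Every entry pins an expected SHA-256 digest; verification needs
# nothing outside the network boundary.
LOCAL_INDEX = {
    "salesforce-mcp": {
        "version": "1.2.0",
        "sha256": hashlib.sha256(b"bundle-bytes").hexdigest(),
    }
}

def verify_bundle(name: str, content: bytes) -> bool:
    """Return True only if the bundle's hash matches the local index entry."""
    entry = LOCAL_INDEX.get(name)
    if entry is None:
        return False
    return hashlib.sha256(content).hexdigest() == entry["sha256"]

print(verify_bundle("salesforce-mcp", b"bundle-bytes"))   # True
print(verify_bundle("salesforce-mcp", b"tampered-bytes")) # False
```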

21+ MCP servers. Enterprise integrations, all published. Every MCP server we build ships with full source code. CRM integrations, productivity tools, data services, communication platforms, all open, all forkable, all independently maintainable. When a client needs to modify how the Salesforce integration handles custom objects, they modify the source code. They do not file a feature request and wait six months.

This is what real ownership looks like at the integration layer. Proprietary integrations create a permanent dependency on the vendor’s update cycle. Open-source integrations put the client in control of their own timeline.

MCP Trust Framework. Open security standard. This is where the argument comes full circle. It is not enough to open-source the tools. The standard by which those tools are evaluated for security must also be open. A proprietary security certification is an oxymoron: you are trusting a vendor to certify its own trustworthiness.

The MCP Trust Framework covers authentication, authorization, data handling, supply chain integrity, and runtime security for MCP servers. It is published openly. Anyone can contribute, audit, or adopt it. It is integrated into mpak’s security scanning so that every bundle in the registry is evaluated against a public standard, not a private checklist.
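A public checklist is, mechanically, just a set of rules anyone can run. The sketch below evaluates a bundle manifest against checks named after the framework's categories; the manifest fields and pass conditions are illustrative assumptions, not the MTF's actual rules.

```python
# Hypothetical checks in the spirit of the MCP Trust Framework's
# categories (authentication, supply chain, data handling). The
# manifest fields and rules are illustrative assumptions.
CHECKS = {
    "authentication": lambda m: m.get("auth") in ("oauth2", "api_key"),
    "supply_chain": lambda m: bool(m.get("signed")),
    "data_handling": lambda m: m.get("telemetry") is False,
}

def evaluate(manifest: dict) -> dict:
    """Run every check and report pass/fail per category."""
    return {name: check(manifest) for name, check in CHECKS.items()}

manifest = {"auth": "oauth2", "signed": True, "telemetry": False}
report = evaluate(manifest)
print(all(report.values()))  # True
```

Because both the checks and the code that runs them are published, a failing report can be disputed, reproduced, and fixed by anyone, which is exactly what a private checklist prevents.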

Trust in AI infrastructure cannot be proprietary. If the security standard is controlled by one company, it serves that company’s interests. An open standard serves the ecosystem.

The business case for ownership

The open-source model creates a different kind of client relationship. Instead of customers who depend on a vendor, it creates customers who grow.

Clients who own their infrastructure invest more confidently. When you know you can read the source code, modify it, and run it independently, you build on top of it. You extend the schemas. You write new skills. You hire developers who understand the stack because the stack is Python and TypeScript, not a vendor-specific DSL that requires certified specialists.

This is the Business-as-Code principle applied to infrastructure itself. Schemas and skills are open formats: JSON Schema and Markdown. The tools that process those formats are open source. The security standards that validate those tools are open. At no point in the chain does a proprietary dependency gate your ability to operate, extend, or maintain your AI systems.

Contrast this with the proprietary model. Every extension requires the vendor. Every customization requires the vendor’s professional services team. Every hire must know the vendor’s proprietary platform. When the vendor raises prices (and they will), you have no alternative, because your entire operational layer is built on their closed infrastructure.

Open source eliminates that dynamic. Not through goodwill. Through architecture.

Post-engagement independence

The Embed Model (embed, build, transfer, leave) requires open source as a structural precondition. You cannot transfer ownership of proprietary tools. You can transfer a license to use them, but a license is not ownership. A license is permission that can be revoked.

Every NimbleBrain engagement ends with the client holding full source code, documentation, and the operational knowledge to run the system independently. MIT and Apache 2.0 licenses. No usage tracking. No phone-home. No license server. No annual renewal that holds the system hostage.

This is what Escape Velocity looks like at the infrastructure level. The client’s AI system is self-sustaining. It does not depend on NimbleBrain’s continued existence. It does not depend on a vendor’s continued good behavior. It runs on infrastructure the client owns, maintained by developers the client employs, governed by standards the client can verify.

The Counterarguments

“Open source has no support”

During an engagement, NimbleBrain provides direct support: embedded engineers who built the tools and know every edge case. Post-engagement, clients have full source code, complete documentation, and a codebase written in standard Python and TypeScript. They do not need vendor-certified specialists. Any competent developer who knows the language can maintain and extend the system.

Compare this to the proprietary support model. The vendor offers a support SLA: response times, escalation paths, guaranteed uptime percentages. That SLA covers the availability of a platform you do not own. If the vendor decides to deprecate a feature, your SLA does not help. If the vendor changes their pricing, your SLA does not help. If the vendor gets acquired and the product roadmap shifts, your SLA does not help.

Open-source support is not a vendor’s promise to keep answering your calls. It is the structural ability to solve your own problems with your own team. That is a more durable form of support than any SLA.

“Proprietary tools have better features”

Sometimes true in the short term. Proprietary vendors can focus development resources on polished UX and advanced features that open-source projects have not built yet. This is a real advantage for the first six months.

But features are temporary advantages. Ownership is permanent. The best feature set in the world is worthless if the vendor can revoke access, as one million Windsurf developers discovered. Features also come with constraints: you get what the vendor builds, on the vendor’s timeline, with the vendor’s priorities. If the feature you need is not on their roadmap, you wait. Or you hire their professional services team at $300/hour to build it for you on their closed platform.

With open source, you build the feature yourself. Or you hire any developer to build it. Or you contribute it upstream and the entire community benefits. The feature gap, if it exists, closes on your timeline, not the vendor’s.

“We trust our vendor”

The developers who used Windsurf trusted Codeium. Codeium trusted Anthropic. Trust was not the issue. The issue was that trust, no matter how genuine, does not change the structural dynamics of proprietary infrastructure. Trust does not prevent contract disputes. Trust does not prevent acquisitions. Trust does not prevent a vendor from deciding that your market segment is no longer strategically important.

Trust is an emotion. Infrastructure is architecture. You do not build mission-critical systems on emotions. You build them on verifiable properties: inspectable code, forkable repositories, self-hostable deployments. These properties hold whether you trust the vendor or not. They hold whether the vendor exists or not.

The question is not “do you trust your vendor?” The question is “what happens when trust is no longer relevant?” Open-source infrastructure answers that question: everything keeps running.

“Maintaining open-source tools is expensive”

Maintaining proprietary vendor dependencies is more expensive. You just do not see the cost until it arrives.

Vendor pricing goes up. The introductory rate that made the business case work gets replaced by enterprise pricing that changes the ROI entirely. Features get deprecated. The integration you built your workflow around gets sunset with 90 days’ notice. APIs change without backward compatibility. The upgrade that was supposed to take a sprint takes a quarter.

Open-source maintenance is a visible, predictable cost. You know the codebase. You control the update timeline. You decide when to upgrade and what to change. There are no surprise pricing changes. There are no forced migrations. There are no deprecation notices from a vendor whose roadmap you have no input on.

The most expensive infrastructure is the infrastructure that holds you hostage while sending you a bill.

The Conclusion

The AI infrastructure layer is being built right now. The tools, frameworks, registries, and security standards that organizations adopt today will determine whether they own their AI systems or rent them for the next decade.

NimbleBrain’s position is unambiguous. We open-source every tool we build. We transfer full ownership on every engagement. We built the MCP Trust Framework so the entire ecosystem (not just our clients) can verify what it is running. We designed every component to be inspectable, forkable, and self-hostable because those are the properties that make infrastructure trustworthy, not marketing claims and vendor promises.

The Windsurf incident was a $0 lesson for anyone paying attention. It cost nothing to watch and learn that proprietary AI infrastructure is a single point of failure disguised as a product. The developers who paid the real price were the ones who built their workflows on a closed platform and discovered (overnight, with no warning) that they owned nothing.

The Anti-Consultancy position on infrastructure is simple: own everything. Inspect everything. Fork if needed. If your AI infrastructure provider cannot show you the source code, cannot let you self-host, and cannot give you a migration path off their platform, you do not have infrastructure. You have a dependency.

Dependencies get revoked. Ownership does not.


Frequently Asked Questions

What was the Windsurf incident?

In November 2025, Anthropic terminated Windsurf's API access over a contract dispute. Windsurf (formerly Codeium) had 1M+ developers using their AI coding tool, which was built on Claude. When API access was cut, the tool stopped working. Developers had no fallback, no data export, and no migration path. It demonstrated the single-point-of-failure risk of building on proprietary AI infrastructure.

Does open source mean less secure?

The opposite. Open-source code can be audited by anyone, so vulnerabilities are found and fixed faster. Proprietary code hides vulnerabilities behind access control. The MCP Trust Framework (mpaktrust.org) provides a security standard specifically for MCP server infrastructure, ensuring that open-source tools meet enterprise security requirements.

What does NimbleBrain open-source?

Everything we build for client engagements: Upjack (declarative AI app framework), mpak (MCP server registry), our MCP servers (21+), our skills framework, and the MCP Trust Framework. Clients get full source code with MIT or Apache 2.0 licenses. No proprietary dependencies.

What about proprietary AI models like Claude?

Models are different from infrastructure. Using Claude as an LLM provider is a reasonable dependency: the model is a commodity, and you can switch providers. The infrastructure layer (tools, integrations, frameworks, orchestration) is where lock-in is dangerous. That's what we open-source.
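The claim that the model layer is swappable can be sketched as a thin provider interface: application code depends on a small contract, and each vendor sits behind an adapter. The class and method names below are illustrative assumptions, not a real SDK.

```python
from typing import Protocol

# Hypothetical sketch of keeping the model layer swappable: application
# code depends only on this small interface, never on a vendor SDK.
class LLMProvider(Protocol):
    def complete(self, prompt: str) -> str: ...

class EchoProvider:
    """Stand-in provider so the sketch runs without any API access."""
    def complete(self, prompt: str) -> str:
        return f"echo: {prompt}"

def summarize(provider: LLMProvider, text: str) -> str:
    """Application logic sees only the interface; swapping vendors
    means writing one new adapter, not rewriting the application."""
    return provider.complete(f"Summarize: {text}")

print(summarize(EchoProvider(), "quarterly report"))
```

Swapping Claude for another model then means adding one adapter class that satisfies the same interface, which is what keeps the model a commodity rather than a lock-in point.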

Can enterprise clients self-host NimbleBrain tools?

Yes. Every tool is designed for self-hosting on standard infrastructure (Kubernetes, Docker). No phone-home, no license server, no usage tracking. Clients who need air-gapped deployments or strict data residency can run everything on their own infrastructure.

What is the MCP Trust Framework?

The MCP Trust Framework (MTF) is an open security standard for evaluating and certifying MCP servers. It covers authentication, authorization, data handling, supply chain integrity, and runtime security. Published at mpaktrust.org and integrated into mpak's security scanning. Because trust in AI infrastructure shouldn't be proprietary either.

Ready to put this thesis into practice?

Email directly: hello@nimblebrain.ai