A 4-week AI sprint produces 8-12 production automations running on your data, connected to your systems, governed by your rules. Not a demo. Not a proof of concept. Production systems that handle real business operations from the moment they go live.
This page breaks down exactly what happens in each week: what NimbleBrain delivers, what your team contributes, and what you walk away with. For the strategic case for speed over long timelines, see Sprint vs. Marathon. For real outcomes from real engagements, see Real Engagement Timelines.
Before Week 1: The Scope Call
Every engagement starts with a scope call. One to two hours. The goal is simple: define the problem, the target process, and what success looks like.
This is not a discovery phase. Discovery phases take 4-6 weeks and produce documents that nobody references once building starts. The scope call produces a Statement of Work: fixed price, fixed timeline, fixed deliverables. You know what you are buying before you sign anything.
What we need from you: an executive sponsor who owns the budget and a domain expert who knows the target process inside out. The sponsor decides. The expert teaches. If both are in the room, we can scope a meaningful engagement in a single session.
Most clients go from first call to Week 1 kickoff in under two weeks. Contracts are straightforward because the scope is fixed. There is nothing to negotiate when both sides know exactly what gets delivered.
Week 1: Domain Analysis and Business-as-Code
The NimbleBrain team embeds in your operations. Not in a conference room running workshops. In the process itself, watching how your people actually work, where decisions get made, what information flows between systems, and the gaps that only experienced operators notice.
What NimbleBrain delivers:
The foundation is Business-as-Code: your business domain encoded as machine-readable artifacts that agents can execute against.
Entity schemas (10-15 definitions). JSON Schema definitions of every business entity relevant to the engagement. Customers, orders, contracts, approval chains, SLA tiers, pricing rules. Every noun that matters. Each schema defines required fields, valid states, relationships, and constraints. These are not database schemas. They are the definitions your agents will use to understand your business.
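A minimal sketch of what one such schema might look like, expressed as JSON Schema in Python. The entity, field names, states, and SLA tiers here are hypothetical illustrations, not actual engagement artifacts, and the validator covers only the subset of keywords used:

```python
# Hypothetical "order" entity schema: required fields, valid states,
# a relationship (customer_id), and a constraint (non-negative total).
ORDER_SCHEMA = {
    "$id": "entities/order.schema.json",
    "type": "object",
    "required": ["order_id", "customer_id", "status", "total"],
    "properties": {
        "order_id": {"type": "string"},
        "customer_id": {"type": "string"},  # relationship: Customer entity
        "status": {                         # valid states, not free text
            "enum": ["draft", "submitted", "approved", "fulfilled", "cancelled"]
        },
        "total": {"type": "number", "minimum": 0},  # constraint: no negative totals
        "sla_tier": {"enum": ["standard", "priority", "enterprise"]},
    },
}

def validate(entity: dict, schema: dict) -> list[str]:
    """Tiny validator for the subset of JSON Schema used above."""
    errors = []
    for field in schema["required"]:
        if field not in entity:
            errors.append(f"missing required field: {field}")
    for field, rule in schema["properties"].items():
        if field not in entity:
            continue
        value = entity[field]
        if "enum" in rule and value not in rule["enum"]:
            errors.append(f"{field}: {value!r} is not a valid state")
        if rule.get("type") == "number" and isinstance(value, (int, float)):
            if "minimum" in rule and value < rule["minimum"]:
                errors.append(f"{field}: below minimum {rule['minimum']}")
    return errors

order = {"order_id": "ORD-1", "customer_id": "CUST-9",
         "status": "submitted", "total": 120.0}
print(validate(order, ORDER_SCHEMA))  # [] — entity conforms
```

Because the schema is machine-readable, an agent can check every entity it touches against the same definition a human reviewer would use.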
Operational skills (15-20 documents). Structured markdown that captures the decision logic your domain experts carry in their heads. How to evaluate a purchase order against contract terms. When to escalate a customer complaint versus resolve it directly. Which invoices require manual review and why. Each skill encodes triggers, decision steps, exception handling, and expected outputs.
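As a sketch, a skill document of this shape might read as follows. The process, thresholds, and routing labels are hypothetical examples, not a real client's rules:

```markdown
# Skill: Invoice Review Routing (hypothetical example)

## Trigger
A new invoice enters the queue from the AP inbox.

## Decision steps
1. Match the invoice to an open purchase order by PO number.
2. If the amounts match within 2%, approve for payment.
3. If the variance exceeds 2% or no PO is found, flag for manual review.

## Exceptions
- First invoice from a new vendor: always route to manual review.
- Credit memos: skip matching and route to the AP lead.

## Expected output
A routing decision (`auto_approve` | `manual_review`) with a one-line reason.
```

The structure is the point: triggers, steps, exceptions, and outputs are explicit, so an agent can execute the logic and a domain expert can audit it at a glance.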
Context documents. Organizational background that makes schemas and skills coherent: industry position, team structure, customer segments, regulatory constraints, strategic priorities. The context that turns a generic agent into one that operates like it has been on your team for a year.
What your team contributes (4-6 hours):
Domain expertise. Your domain expert teaches us the process, not by presenting slides, but by walking through real work. Real orders, real escalations, real exceptions. The knowledge audit surfaces the rules that live in your expert’s head but have never been written down. Your expert reviews every schema and skill to confirm they match reality. If something is wrong, it gets fixed the same day.
End-of-week deliverable: Complete Business-as-Code foundation, reviewed, validated, and ready for agents to build on. Architecture plan documenting which automations will be built, which integrations are needed, and what governance is required.
Week 2: Skill Authoring and First Agent
Building starts. The Business-as-Code foundation from Week 1 gives agents everything they need to operate on your business domain.
What NimbleBrain delivers:
MCP server integrations (3-5 connections). Every system your agents need to interact with (CRM, ERP, email, databases, project management tools) gets a dedicated MCP server that provides a standardized interface. Real authentication. Real error handling. Real data flowing in both directions.
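The standardized interface idea can be sketched as a named set of tools an agent calls through one registry. This is a simplified stand-in, not the real MCP SDK: a production server adds the protocol layer, authentication, and error handling, and the CRM call here is hypothetical:

```python
# Simplified stand-in for an MCP server: tools registered under one name,
# invoked through a uniform call interface.
from typing import Callable

class ToolServer:
    def __init__(self, name: str):
        self.name = name
        self.tools: dict[str, Callable] = {}

    def tool(self, fn: Callable) -> Callable:
        """Register a function as a callable tool."""
        self.tools[fn.__name__] = fn
        return fn

    def call(self, tool_name: str, **kwargs):
        if tool_name not in self.tools:
            # A real server returns a structured protocol error here.
            raise KeyError(f"unknown tool: {tool_name}")
        return self.tools[tool_name](**kwargs)

crm = ToolServer("crm")

@crm.tool
def get_customer(customer_id: str) -> dict:
    # In production this reads from the actual CRM API with real auth.
    return {"id": customer_id, "tier": "enterprise"}

print(crm.call("get_customer", customer_id="CUST-9"))
```

The payoff of the pattern: the agent never learns a vendor API. It learns one tool interface, and each MCP server translates that interface to the system behind it.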
First working agent on real data. Not a demo on sample data. An agent handling actual business operations, reading from your systems, applying your business rules, producing outputs that go into your production workflows. The first automations are live by mid-week.
Skill refinement. Running agents on real data exposes gaps in the Business-as-Code artifacts immediately. A skill that missed an edge case. A schema that does not account for a customer tier. A context document that references an outdated policy. These gaps surface in hours, not months, because agents validate their own context every time they execute.
What your team contributes (4-6 hours):
Your domain expert reviews agent outputs against their own judgment. Is the pricing calculation correct? Did the escalation route to the right person? Would you have made the same decision? This is not QA in the traditional sense. It is the domain expert teaching the system by validating its behavior. Every correction becomes a skill update that prevents the same error from recurring.
End-of-week deliverable: 3-5 working automations processing real data. Integrations live and tested. First measurable results from production operations.
Week 3: Integration, Scale, and Governance
Week 3 expands from the initial automations to the full set. The methodology makes this fast. Each new automation follows the same pattern (schema + skill + context + MCP server + agent), and the foundation from Weeks 1-2 already covers most of the entities and domain knowledge.
What NimbleBrain delivers:
Full automation suite (8-12 total). New automations deploy rapidly because they build on the existing Business-as-Code foundation. The entity schemas are already defined. The context is already captured. Each new automation needs only its specific skills and any additional MCP connections.
Governance layer. Approval workflows for automations that touch financial data. Audit trails for compliance-sensitive operations. Human-in-the-loop checkpoints for decisions above defined thresholds. The governance is not bolted on. It is encoded in the Business-as-Code artifacts from the start. A skill specifies when human approval is required. A schema defines the audit fields.
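A minimal sketch of governance encoded in the artifacts rather than bolted on: the rule declares its approval threshold and audit requirement, and the runtime enforces them. The action name, threshold, and fields are illustrative assumptions:

```python
# Hypothetical governance rules, declared as data alongside the skill.
APPROVAL_RULES = {
    "pay_invoice": {"requires_approval_above": 10_000, "audit": True},
}

def route_action(action: str, amount: float) -> dict:
    """Route an agent action per its declared governance rule."""
    rule = APPROVAL_RULES.get(action, {})
    needs_human = amount > rule.get("requires_approval_above", float("inf"))
    return {
        "action": action,
        "amount": amount,
        "route": "human_approval" if needs_human else "auto_execute",
        "audited": rule.get("audit", False),  # feeds the audit trail
    }

print(route_action("pay_invoice", 25_000)["route"])  # human_approval
print(route_action("pay_invoice", 500)["route"])     # auto_execute
```

Because the threshold lives in the artifact, changing the governance policy is a data edit the client's team can make, not a code change.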
Edge case handling. The 80/20 rule applies to every AI deployment. The first automations handle the 80% of straightforward cases. Week 3 addresses the other 20%: the exceptions, the unusual inputs, the scenarios that occur twice a quarter but matter when they do. Every edge case caught is a skill update that makes the system more reliable.
Production monitoring. Dashboards showing agent activity, success rates, error patterns, and performance metrics. Alerts for failures, anomalies, and degradation. Your team will use these same dashboards after the engagement ends.
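The core of that monitoring can be sketched in a few lines: record per-automation outcomes, compute success rates, and raise an alert when a rate degrades below a threshold. The automation name and 95% threshold are illustrative, not engagement defaults:

```python
# Minimal sketch of success-rate monitoring with degradation alerts.
from collections import defaultdict

class AgentMonitor:
    def __init__(self, alert_below: float = 0.95):
        self.alert_below = alert_below
        self.runs = defaultdict(lambda: {"ok": 0, "failed": 0})

    def record(self, automation: str, success: bool) -> None:
        self.runs[automation]["ok" if success else "failed"] += 1

    def success_rate(self, automation: str) -> float:
        r = self.runs[automation]
        total = r["ok"] + r["failed"]
        return r["ok"] / total if total else 1.0

    def alerts(self) -> list[str]:
        """Automations whose success rate has dropped below the threshold."""
        return [a for a in self.runs if self.success_rate(a) < self.alert_below]

mon = AgentMonitor()
for _ in range(18):
    mon.record("invoice_review", True)
mon.record("invoice_review", False)
mon.record("invoice_review", False)
print(mon.alerts())  # ['invoice_review'] — 18/20 = 0.90, below 0.95
```

The dashboards described above layer visualization on top of exactly this kind of data: counts, rates, and thresholds per automation.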
What your team contributes (4-6 hours):
Your domain expert validates the expanded automations. Your executive sponsor reviews the governance model. If you have an IT or security team, they review the integration architecture and access controls. This is the week where organizational readiness meets technical readiness.
End-of-week deliverable: 8-12 automations running in production. Governance layer operational. Monitoring and alerting live. Edge cases handled and documented.
Week 4: Production Deployment and Handoff
Week 4 is finalization, not launch. The systems have been running on real data since Week 2. This week ensures everything is stable, documented, and transferred.
What NimbleBrain delivers:
Production stabilization. Final performance tuning. Load validation. Failover testing. Confirmation that all automations operate within acceptable error rates under production conditions.
Operational runbooks. Practical, tested procedures for the tasks your team will perform daily: how to update a skill when a business rule changes, how to deploy a modified MCP server, how to diagnose why an agent produced the wrong output, how to add monitoring for a new automation. Each runbook was validated during the engagement. Your team uses them while we are still present, so gaps get caught and filled in real time.
Troubleshooting guide. Common failure modes, diagnostic steps, and resolution procedures. When something breaks (and something will eventually break), your team has a structured path to identify the cause and fix it.
Training session (2-3 hours, recorded). Walkthrough of every system, every artifact, every operational procedure. Your team operates the systems during the training, making changes, deploying updates, running diagnostics in real time. Recorded so new team members can onboard without anyone explaining the system from scratch.
Independence Kit. The complete package: running production systems, all Business-as-Code artifacts, packaged MCP servers, operational documentation, recorded training. Everything your team needs to reach Escape Velocity, the point where you operate, improve, and extend the AI systems without external dependency.
What your team contributes (6-8 hours):
Active participation in the handoff. Your team operates the systems during training. They run through the runbooks. They make a skill update and deploy it. They diagnose a simulated issue and resolve it. The handoff is not a presentation. It is supervised practice.
End-of-week deliverable: Full production deployment. Independence Kit delivered. Team trained and operating. The engagement is complete. The Escape Velocity path is defined, and your team is on it.
The Math
Pre-engagement: 1-2 weeks (scope call, SOW, contracts). Week 1: domain analysis, 10-15 entity schemas, 15-20 skills, architecture plan. Week 2: 3-5 MCP integrations, first agent on real data, 3-5 automations live. Week 3: scale to 8-12 automations, governance layer, monitoring, edge cases. Week 4: stabilization, documentation, training, Independence Kit.
Total elapsed: 5-6 weeks from first conversation to fully independent operation. Total productive building time: 4 weeks. Total deliverables: 8-12 production automations, complete Business-as-Code foundation, packaged integrations, operational documentation, trained team.
A traditional consultancy would still be writing the discovery report. The 4-week sprint produces the outcome they spend 6 months building toward, and delivers it to a team that can sustain it independently. That is not a claim about working faster. It is a claim about working differently. The Embed Model eliminates the phases that do not produce production systems. What remains is four weeks of building, shipping, and transferring capability.
Frequently Asked Questions
What happens before Week 1?
A scope call (1-2 hours) where we define the problem, the target process, and the success criteria. Then a Statement of Work with fixed price and timeline. Most clients go from first call to Week 1 start in under two weeks.
What does the client team do during the 4 weeks?
Domain expertise: you teach us the business process. We need 4-6 hours per week from a domain expert (the person who knows the process inside out) and 1-2 hours per week from the executive sponsor. Your domain expert participates in skill authoring, reviews output, and validates behavior.
Is 4 weeks really enough for production AI?
Yes, for a focused, well-scoped engagement. We're not building a general-purpose AI platform. We're deploying a specific AI system for a specific business process. Focused scope + experienced team + proven stack = production in 4 weeks.