Every AI application framework on the market requires the same thing: engineers. You want an AI-powered customer onboarding system? Write Python. You want an agent that routes support tickets intelligently? Write more Python. You want to change the routing logic because the business changed last quarter? Find an engineer, wait for a sprint, deploy a code release. The bottleneck is never the AI. It’s the engineering dependency between what the business needs and what the code does.
Upjack eliminates that dependency. It is an open-source declarative framework where AI applications are defined as JSON schemas and natural language skills, not imperative code. A manifest describes what the app is. Entity schemas describe what the app knows. Skills describe what the app does. Context files provide the background knowledge the app needs to make good decisions. MCP server connections give the app tools to act on external systems. The result is a complete AI application built from structured knowledge artifacts that anyone in the organization can read, modify, and own.
Full documentation and source code are at upjack.dev.
The Core Concepts
An Upjack application is a directory of human-readable files. Five types of artifacts compose the entire application.
Manifest
The manifest is a JSON file that declares the application’s identity and structure. App name, version, description, which schemas it uses, which skills it has, which MCP servers it connects to. Think of it as the table of contents. The runtime reads the manifest first and assembles the application from what it finds.
A manifest for a customer onboarding app might declare three entity schemas (Customer, Onboarding_Step, Compliance_Check), four skills (qualify-customer, assign-onboarding-track, check-compliance, generate-welcome-package), two context files (company-policies, product-catalog), and one MCP server connection (CRM integration). Twenty lines of JSON. The entire application architecture, declared.
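A sketch of such a manifest follows. The field names (`schemas`, `skills`, `context`, `mcpServers`) are illustrative assumptions about the format, not the canonical Upjack keys; see upjack.dev for the real schema.

```json
{
  "name": "customer-onboarding",
  "version": "1.0.0",
  "description": "Qualifies new customers and runs their onboarding.",
  "schemas": ["Customer", "Onboarding_Step", "Compliance_Check"],
  "skills": [
    "qualify-customer",
    "assign-onboarding-track",
    "check-compliance",
    "generate-welcome-package"
  ],
  "context": ["company-policies", "product-catalog"],
  "mcpServers": ["crm"]
}
```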
Entity Schemas
Entity schemas are JSON Schema definitions of the business objects the application works with. Customers, orders, workflows, compliance checks: every noun in your domain gets a formal definition with required fields, valid states, relationships, and constraints.
A customer schema doesn’t describe a customer in a paragraph. It defines one as a data structure: segments with enum values, lifecycle stages with valid transitions, communication preferences with defaults, contract terms with date ranges. When an AI agent reads this schema, it knows exactly what a customer is and how customers relate to other entities. No guessing. No hallucinating fields that don’t exist.
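A minimal sketch of such an entity definition in standard JSON Schema, with hypothetical field names and enum values chosen for illustration:

```json
{
  "$schema": "https://json-schema.org/draft/2020-12/schema",
  "title": "Customer",
  "type": "object",
  "required": ["id", "segment", "lifecycle_stage"],
  "properties": {
    "id": { "type": "string" },
    "segment": { "enum": ["smb", "mid_market", "enterprise"] },
    "lifecycle_stage": { "enum": ["lead", "trial", "active", "churned"] },
    "communication_preference": {
      "enum": ["email", "phone", "slack"],
      "default": "email"
    },
    "contract_start": { "type": "string", "format": "date" },
    "contract_end": { "type": "string", "format": "date" }
  }
}
```

An agent reading this file can validate any customer record against it and knows, for example, that a segment outside the enum is invalid rather than a field to improvise around.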
This is Business-as-Code at its most concrete. The business entity definitions that traditionally live in requirements documents and database diagrams become the application itself. Change the schema, change the application. No code translation layer.
Skills
Skills are markdown files that describe processes, behaviors, and decision logic: the pattern called Skills-as-Documents. A skill has a defined structure: trigger conditions (when does this apply?), step-by-step procedures (what do you do?), decision branches (what if X happens?), exception handling (what goes wrong?), and expected outputs (what does success look like?).
Consider a “qualify-customer” skill. It doesn’t just say “check if the customer qualifies.” It specifies the scoring criteria, the threshold for each tier, what happens when a customer falls between tiers, which exceptions require human review, and what the output format looks like. This is the judgment a ten-year veteran applies automatically and that a new hire takes months to learn, captured in a structured document any AI agent can follow.
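A skeleton of what such a skill file might look like. The scoring criteria, thresholds, and tier names below are invented for illustration; a real skill would carry the domain expert's actual logic.

```markdown
# qualify-customer

## Trigger
A new Customer record enters lifecycle_stage "lead".

## Procedure
1. Score the customer: +2 for a named budget, +2 for a
   decision-maker contact, +1 for a target-industry match.
2. Score 4 or higher: assign the "fast-track" tier.
3. Score 2-3: assign the "standard" tier.
4. Score 0-1: assign the "nurture" tier.

## Exceptions
- Customers in regulated industries always require human review
  before tier assignment.

## Output
A qualification record: tier, score, and the list of reasons.
```

Nothing here requires an engineer: headers, lists, and plain language, with structure consistent enough for an agent to follow step by step.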
The critical insight: skills are written by domain experts, not engineers. A VP of Operations writes a better operations skill than a software engineer who needs to interview five people to learn the process. The format is markdown: headers, lists, plain language. The structure makes it machine-readable. The content makes it accurate.
Context Files
Context files provide background knowledge that makes schemas and skills coherent. Company policies, industry regulations, product catalogs, competitive positioning, historical patterns. Without context, an agent has data definitions and procedures but no judgment about when or how to apply them.
Two companies with identical customer schemas and identical qualification skills should still produce different outcomes if one is a B2B SaaS startup where every customer affects runway and the other is a B2C retailer where volume drives the business. Context is what enables that differentiation.
MCP Server Connections
MCP (Model Context Protocol) connections give the application tools to act on external systems. A CRM connection lets the agent read and update customer records. An email connection lets it send communications. A database connection lets it query operational data. Each connection is declared in the manifest, and the agent knows what tools it has and uses them within the skills it executes.
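A manifest's MCP section might be declared along these lines. The `command`/`args`/`env` shape mirrors common MCP client configurations, and the server package name is hypothetical; treat the exact keys as an assumption about Upjack's format.

```json
{
  "mcpServers": {
    "crm": {
      "command": "npx",
      "args": ["-y", "@example/crm-mcp-server"],
      "env": { "CRM_API_KEY": "${CRM_API_KEY}" }
    }
  }
}
```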
The connection model is standardized. Any MCP-compatible server works. NimbleBrain maintains 21+ open-source MCP servers at mpak.dev, and the ecosystem grows daily. You’re not locked into a proprietary integration layer.
The Mental Model: HTML for AI Apps
The best analogy for Upjack is HTML for web pages. Before HTML, building a web page required deep systems programming. HTML introduced a declarative layer: you describe the structure and content, and the browser handles rendering. You didn’t need to understand display drivers to make a web page.
Upjack does the same for AI applications. You describe the domain and the behavior, and the runtime handles orchestration, tool routing, state management, and error recovery. You don’t need to understand prompt engineering, chain composition, or agent architecture to make an AI application that works.
The analogy extends to ownership. Anyone can read HTML. Anyone can modify it. A designer, a content writer, a business analyst: all can work directly with HTML. Upjack apps work the same way. The schemas are JSON. The skills are markdown. The manifest is a list of declarations. A domain expert can open any file, understand what it does, and change it. The application updates immediately.
Why We Built It
NimbleBrain didn’t build Upjack because the world needed another framework. We built it because every existing framework required engineering skill to operate, and our engagements proved that the real bottleneck in AI adoption was never the model. It was the gap between business knowledge and running code.
On every engagement, we watched the same pattern: domain experts knew exactly what the AI should do, but they couldn’t express it in Python. Engineers could write Python, but they didn’t understand the domain deeply enough to encode the right logic. The translation layer between business and engineering introduced delay, distortion, and dependency.
Upjack removes the translation layer. The domain expert’s description of the process IS the application. The schemas they define ARE the data model. The skills they write ARE the behavioral logic. Nothing gets lost in translation because there is no translation.
We use Upjack on every client engagement. The schemas, skills, and context files we build during the Business-as-Code phase of an engagement are Upjack-compatible by default. By week two, those artifacts are a running application. By week four, the client owns a system they can keep running, update, and grow independently, which is the architectural guarantee behind Escape Velocity.
A Simple Example: Customer Onboarding
A customer onboarding app in Upjack looks like this. The manifest declares three schemas (Customer, Onboarding_Step, Compliance_Check), four skills, one context file with company policies, and an MCP connection to the CRM. Total: roughly 20 lines of JSON for the manifest, three schema files averaging 30-40 lines each, four skill files in markdown, and a context document.
The agent reads the manifest, loads the schemas to understand the domain, loads the skills to understand the processes, loads the context for background knowledge, and connects to the CRM through the MCP server. Now it can qualify customers, assign onboarding tracks, run compliance checks, and generate welcome packages, all following the specific logic the domain expert encoded in the skill files.
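That startup sequence can be sketched in a few lines of Python. This is not Upjack's actual runtime: the file layout (`manifest.json`, `schemas/`, `skills/`) and manifest keys are assumptions made for illustration, and the sketch stops at assembling the artifacts, before any agent execution.

```python
# Hypothetical sketch of an Upjack-style startup sequence: read the
# manifest, then load each declared artifact from disk. File layout and
# manifest keys are illustrative assumptions, not the real Upjack format.
import json
from pathlib import Path
from tempfile import TemporaryDirectory

def load_app(app_dir: Path) -> dict:
    """Assemble an application from its manifest and artifact files."""
    manifest = json.loads((app_dir / "manifest.json").read_text())
    return {
        "manifest": manifest,
        # Entity schemas: JSON Schema files describing business objects.
        "schemas": {
            name: json.loads((app_dir / "schemas" / f"{name}.json").read_text())
            for name in manifest.get("schemas", [])
        },
        # Skills: markdown procedure documents, loaded as plain text.
        "skills": {
            name: (app_dir / "skills" / f"{name}.md").read_text()
            for name in manifest.get("skills", [])
        },
    }

# Build a toy app directory on disk, then load it.
with TemporaryDirectory() as tmp:
    root = Path(tmp)
    (root / "schemas").mkdir()
    (root / "skills").mkdir()
    (root / "manifest.json").write_text(json.dumps({
        "name": "customer-onboarding",
        "schemas": ["Customer"],
        "skills": ["qualify-customer"],
    }))
    (root / "schemas" / "Customer.json").write_text(
        json.dumps({"type": "object", "required": ["id", "segment"]}))
    (root / "skills" / "qualify-customer.md").write_text(
        "# qualify-customer\n## Trigger\nNew customer record created.\n")

    app = load_app(root)
    print(app["manifest"]["name"])   # customer-onboarding
    print(sorted(app["schemas"]))    # ['Customer']
```

The point the sketch makes concrete: the runtime's job is assembly and orchestration, while every piece of domain knowledge lives in the files the domain expert wrote.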
When the onboarding process changes (a new compliance requirement, a new customer tier, a seasonal adjustment), the domain expert opens the relevant skill file, updates the logic in plain language, and the application reflects the change. No code review. No deployment pipeline. No engineering ticket.
That’s Upjack. Describe what the app does. Let the framework handle how. For a hands-on walkthrough, see Building Your First Upjack App in 30 Minutes. For a comparison with other frameworks, see Upjack vs. LangChain vs. CrewAI. To see how Upjack powers real engagements, see How Upjack Powers NimbleBrain Engagements.
Frequently Asked Questions
What does “declarative” mean in this context?
Declarative means you describe the desired outcome rather than programming the steps to reach it. In Upjack, you define entities, skills, and context in JSON schemas and natural language, and the framework figures out how to execute them. It’s the difference between ordering from a menu and writing the recipe yourself: you say what you want, and the kitchen handles how.
Do I need to know how to code to use Upjack?
To build basic apps, no. Upjack apps are defined in JSON schemas and natural language skill files, and a domain expert can modify both. To extend the framework itself or build custom integrations, yes: that requires Python or TypeScript.
Is Upjack open source?
Yes. Upjack is fully open source under the MIT license. You can read the code, fork it, modify it, and self-host it. NimbleBrain uses Upjack on every engagement, so the framework is continuously improved through real production usage.