The Knowledge Audit: How to Find and Prioritize What to Encode
You’ve recognized the tribal knowledge problem. Your best people carry critical business logic in their heads, and your AI agents can’t access any of it. The next question is practical: where do you start?
A knowledge audit is the systematic process of finding who knows what, mapping how they make decisions, and prioritizing which knowledge to encode first. It’s not a six-month documentation marathon. Done right, it’s a focused sprint that delivers an actionable encoding backlog within two weeks.
Who to Interview
Not everyone holds tribal knowledge equally. You’re looking for specific profiles:
The long-tenured operators. People who’ve been in the role 3+ years and handle the edge cases. They’re the ones new hires are told to shadow. If you hear “ask Sarah about that” more than twice a week, Sarah is on your interview list.
The exception handlers. Whoever gets pulled in when the standard process breaks. They might not be the most senior person. Sometimes it’s a mid-level specialist who’s become the de facto expert on a narrow but critical domain.
The onboarding bottlenecks. Ask your most recent hires: “What took the longest to learn? Who did you go to most?” The answers point directly to undocumented knowledge.
The cross-functional connectors. People who bridge departments. They know the handoff points, the informal agreements between teams, and the workarounds that keep processes flowing across system boundaries.
Start with 3-5 people. You can always expand, but a small group surfaces 80% of the critical tribal knowledge in most process areas.
What Questions to Ask
The goal is to surface decision logic, not just process steps. Your wiki already has the process steps. You need the layer underneath: the judgment, the exceptions, the “it depends.”
Start broad, then drill in:
- “Walk me through how you handle [process X] when everything goes normally.”
- “Now tell me about the last time it didn’t go normally. What happened?”
- “What are the top 3 situations where the standard process doesn’t apply?”
- “When a new person gets this wrong, what do they usually miss?”
- “If you were out for a month, what would go sideways first?”
Surface the decision points:
- “When you look at [an order / a request / a case], what do you check first?”
- “What makes you decide to handle something one way versus another?”
- “Are there customers, accounts, or situations that get different treatment? Why?”
- “What rules exist that aren’t written down anywhere?”
Find the “it depends” answers:
- “When someone asks you how to handle X and you say ‘it depends,’ what does it depend on?”
- “What information do you wish you had earlier in this process?”
- “What would break if we automated this exactly as documented?”
Record the interviews. The specific phrasing people use to describe their decision logic often becomes the basis for skills.
Map the Decisions
After interviews, you’ll have a collection of stories, exceptions, and decision rules. Now structure them.
For each process area, build a decision map with four columns:
| Decision Point | Inputs | Logic | Current Location |
|---|---|---|---|
| Customer pricing | Account tenure, volume, contract type | Standard rate unless 5+ years (10% discount) or government (GSA schedule) | Sarah’s memory |
| Order routing | Order value, product type, account flags | Standard flow unless hazmat (→ safety review) or VIP accounts (→ skip senior review) | Team chat history |
| Escalation path | Issue type, customer tier, previous attempts | Billing under $500 → auto-credit. Technical → engineering direct. Named accounts → dedicated manager | Shared tribal knowledge |
This map does two things. First, it makes the implicit explicit: you can see exactly where tribal knowledge lives and what form it takes. Second, it reveals the data model underneath. The entries in the Inputs column become schema fields, and the entries in the Logic column become skills.
Business-as-Code starts to take shape here. Each row in your decision map is a candidate for encoding as a structured artifact that AI agents can read and execute.
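As a sketch of what one encoded row might look like, here is the pricing decision from the example map expressed as a small Python skill. The field names, the `gsa_schedule_rate` stub, and the exact thresholds are illustrative assumptions drawn from the table above, not a prescribed format:

```python
from dataclasses import dataclass


@dataclass
class Account:
    # Fields drawn from the "Inputs" column of the pricing row
    tenure_years: int
    is_government: bool
    standard_rate: float


def gsa_schedule_rate(account: Account) -> float:
    # Placeholder: a real implementation would look up the GSA schedule.
    return account.standard_rate


def pricing_rule(account: Account) -> float:
    """Pricing logic from the decision map, moved out of Sarah's memory.

    Standard rate unless the account has 5+ years of tenure (10% discount)
    or is a government account (GSA schedule).
    """
    if account.is_government:
        return gsa_schedule_rate(account)
    if account.tenure_years >= 5:
        return account.standard_rate * 0.90
    return account.standard_rate
```

Once the logic lives in a function like this, an AI agent (or a unit test) can execute it instead of guessing at it.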
Score by Frequency x Impact
You can’t encode everything at once. You need to prioritize. The simplest effective framework: score each piece of tribal knowledge on two dimensions.
Frequency: How often does this decision get made?
- Daily (3 points)
- Weekly (2 points)
- Monthly or less (1 point)
Impact: What happens when this decision is wrong?
- Revenue loss or customer churn (3 points)
- Delay or rework (2 points)
- Minor inconvenience (1 point)
Multiply the scores. A decision that happens daily (3) and causes customer churn when wrong (3) scores 9. Encode it first. A decision that happens monthly (1) and causes minor inconvenience (1) scores 1: it can wait.
Here’s what a scored backlog looks like:
| Knowledge Item | Frequency | Impact | Score | Status |
|---|---|---|---|---|
| VIP account pricing exceptions | 3 (daily) | 3 (revenue) | 9 | Encode first |
| Hazmat routing rules | 2 (weekly) | 3 (compliance) | 6 | Encode second |
| Onboarding flow customization | 2 (weekly) | 2 (rework) | 4 | Batch 2 |
| Quarterly reporting adjustments | 1 (monthly) | 2 (rework) | 2 | Backlog |
This scoring produces a prioritized encoding backlog: the roadmap for your Context Engineering work. The top items become your first sprint of schema and skill development.
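The scoring itself is simple enough to sketch in a few lines of Python. The item names come from the example backlog above; representing items as tuples is an illustrative choice, not a required artifact:

```python
# Each item: (name, frequency 1-3, impact 1-3)
items = [
    ("VIP account pricing exceptions", 3, 3),
    ("Hazmat routing rules", 2, 3),
    ("Onboarding flow customization", 2, 2),
    ("Quarterly reporting adjustments", 1, 2),
]

# Score = frequency x impact; sort descending to produce the encoding backlog
backlog = sorted(
    ((name, freq * impact) for name, freq, impact in items),
    key=lambda pair: pair[1],
    reverse=True,
)

for name, score in backlog:
    print(f"{score}  {name}")
```

The top of the sorted list is your first encoding sprint.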
The Knowledge Audit Checklist
Use this checklist to run an audit for a single process area. Repeat for each area you want to prepare for AI.
Preparation (Day 1)
- Identify the process area to audit (pick the one with the most “ask Sarah” moments)
- List 3-5 knowledge holders for that area
- Review existing documentation (wiki, runbooks, training materials) so you know what’s already captured
- Schedule a 45-minute interview with each knowledge holder
- Prepare recording setup (with permission)
Interviews (Days 2-4)
- Conduct interviews using the question framework above
- Note specific decision points, exceptions, and “it depends” answers
- Ask for real examples, not hypotheticals: “tell me about the last time this happened”
- Identify which decisions are high-frequency vs. rare
- Capture the exact phrasing people use to describe their logic
Mapping (Days 5-7)
- Build the decision map (Decision Point / Inputs / Logic / Current Location)
- Group related decisions into clusters (pricing, routing, escalation, etc.)
- Identify the data entities that appear across multiple decisions (these become schemas)
- Identify the decision rules that require judgment (these become skills)
- Flag any contradictions between what different people described
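The data entities that recur across decisions can be drafted as schemas at this stage. A minimal sketch using Python dataclasses, with field names assumed from the example decision map rather than a finished design:

```python
from dataclasses import dataclass


@dataclass
class AccountSchema:
    # Appears in the pricing and escalation rows of the decision map
    tenure_years: int
    contract_type: str   # e.g. "standard", "government"
    customer_tier: str   # e.g. "standard", "named", "VIP"


@dataclass
class OrderSchema:
    # Appears in the routing row
    value: float
    product_type: str
    is_hazmat: bool
```

Fields that show up in multiple decisions are the strongest schema candidates; fields that appear only once may belong inside a single skill instead.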
Scoring and Prioritization (Days 8-9)
- Score each item: Frequency (1-3) x Impact (1-3)
- Sort by score descending
- Review the top 5-10 items with the knowledge holders: do they agree on priority?
- Create the encoding backlog with clear owners and target dates
Validation (Day 10)
- Walk the knowledge holders through the decision map
- Confirm: “If an AI agent followed this logic, would it get it right?”
- Note any gaps or corrections
- Finalize the encoding backlog
What Comes Next
The knowledge audit produces two outputs: a decision map and a prioritized encoding backlog. The decision map tells you what your organization actually knows. The backlog tells you what to encode first.
The encoding process (turning those decision maps into schemas and skills) follows a structured workflow. Existing documentation becomes the starting point. The decision rules you surfaced in interviews become the logic layer. The Recursive Loop ensures that each encoded artifact gets tested, refined, and improved through use.
The hardest part is starting. The audit itself is surprisingly energizing. Knowledge holders often appreciate having their expertise recognized and captured. And once you see your business logic laid out in a decision map, the path to encoding becomes clear.
Start Monday. Pick one process area. Interview three people. Map the decisions. Score them. You’ll have an encoding backlog by Friday.
Frequently Asked Questions
How long does a knowledge audit take?
A focused knowledge audit for one department or process area takes 1-2 weeks: a few days of interviews, a few days of mapping, and a day to score and prioritize. You don't audit the entire organization at once. Start with the highest-impact process area and expand from there.
Who should run the knowledge audit?
Someone who understands the process area well enough to ask good follow-up questions, but isn't so embedded that they share the same blind spots. A senior operator from an adjacent team works well. External facilitators can help if internal dynamics make candid interviews difficult.
What if the knowledge holders resist the audit?
Resistance usually comes from fear: that they're being documented for replacement. Address it directly: the audit makes their expertise more valuable, not less. They become the validators who ensure AI agents get it right. Frame the audit as 'let's make sure the system works as well as you do' rather than 'let's capture what you know so we don't need you.'