
The Context Window Is an Architectural Constraint
The context window of language models is limited. This is not a technical detail — it is an architectural constraint that shapes how AI-assisted development should be structured.
In multi-repo environments, this constraint becomes visible quickly. An AI agent working within one repository doesn’t see what happens in another. It doesn’t understand how a frontend function links to a backend endpoint. It doesn’t know which parts of the system are related.
This is not the model’s fault. It is a lack of context.
Shared Context Must Be Built
At one client, several approaches were tried to solve this. After a few variations, the conclusion was clear: a shared context is necessary. In practice, this means shared files — such as prompt.md and instructions.md — that describe how the different parts of the system relate to each other.
At its simplest, this means being able to link a frontend action to its corresponding backend endpoint. Nothing complex, but critical. Without this linkage, the agent doesn’t know what to change when a modification is needed.
How much detail to include is a question in itself. But even a minimal mapping helps the model answer what needs to change and which things are connected.
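As a concrete illustration, a minimal mapping can be nothing more than a lookup table from a UI action to the repository and endpoint behind it. The sketch below assumes hypothetical repository names (`webshop-ui`, `order-service`), file paths, and endpoints; the real entries depend entirely on the system being described.

```python
# Minimal frontend-to-backend mapping. All repo names, files, and
# endpoints here are hypothetical placeholders, not a real system.
FRONTEND_TO_BACKEND = {
    "checkout.submit_order": {
        "frontend": {"repo": "webshop-ui", "file": "src/checkout/submit.ts"},
        "backend": {"repo": "order-service", "endpoint": "POST /api/v1/orders"},
    },
    "profile.save": {
        "frontend": {"repo": "webshop-ui", "file": "src/profile/form.ts"},
        "backend": {"repo": "user-service", "endpoint": "PUT /api/v1/users/{id}"},
    },
}

def related_endpoint(action: str) -> str:
    """Return the backend endpoint an agent should inspect for a UI action."""
    entry = FRONTEND_TO_BACKEND.get(action)
    if entry is None:
        raise KeyError(f"No mapping for action: {action}")
    return entry["backend"]["endpoint"]
```

Even a table this small answers the two questions above: what needs to change, and which things are connected.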
Workspace Structure
In practice, this is implemented by having one parent folder or workspace that contains the repositories as subdirectories. One of those folders can hold instructions and commands, or the parent folder can be its own project with Git submodules.
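One way to make the workspace layout machine-readable is a small discovery script. The sketch below assumes the parent-folder layout described above; it treats any subdirectory containing a `.git` entry as a repository (for submodules, `.git` is a file rather than a directory, so `exists()` covers both cases).

```python
from pathlib import Path

def discover_repos(workspace: Path) -> list[str]:
    """List subdirectories of the workspace that are Git repositories.

    Works for normal clones (.git is a directory) and for submodules
    (.git is a file pointing at the parent repo's .git/modules).
    """
    return sorted(
        child.name
        for child in workspace.iterdir()
        if child.is_dir() and (child / ".git").exists()
    )
```

A script like this is a natural first step for any automation that needs to know which repositories the shared context must cover.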
Automation can be built around this structure: extracting backend endpoints automatically from an OpenAPI description, navigating into the frontend to see what is called where, or using an LLM tool and the browser to map out frontend functionality.
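The endpoint-extraction step can be sketched in a few lines. The function below walks the `paths` object of an already-parsed OpenAPI description and flattens it into `METHOD /path` strings suitable for a shared context file; the sample spec in the usage is invented for illustration.

```python
# HTTP methods that can appear as operation keys in an OpenAPI Paths Object.
HTTP_METHODS = {"get", "put", "post", "delete", "patch", "head", "options"}

def extract_endpoints(openapi: dict) -> list[str]:
    """Flatten a parsed OpenAPI description into 'METHOD /path' strings.

    Non-method keys under a path (such as 'parameters' or 'summary')
    are skipped.
    """
    endpoints = []
    for path, operations in openapi.get("paths", {}).items():
        for method in operations:
            if method in HTTP_METHODS:
                endpoints.append(f"{method.upper()} {path}")
    return sorted(endpoints)
```

Running this against each backend's OpenAPI file gives the raw endpoint list that the frontend-to-backend mapping is then built on top of.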
A dedicated command or workflow for updating and consolidating the description keeps it current. Helper tools for handling multiple linked merge requests across repositories reduce friction further.
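The consolidation step can be as simple as merging per-repo description fragments into one document. This is a minimal sketch under the assumption that each repository emits a JSON fragment with a `repo` field; the file layout and schema are illustrative, not prescribed.

```python
import json
from pathlib import Path

def consolidate(fragment_paths: list[Path], out_path: Path) -> dict:
    """Merge per-repo JSON description fragments into one consolidated
    context document, keyed by repository name, and write it to disk."""
    merged: dict = {}
    for path in fragment_paths:
        fragment = json.loads(path.read_text())
        merged[fragment["repo"]] = fragment
    out_path.write_text(json.dumps(merged, indent=2, sort_keys=True))
    return merged
```

Wired into a repeatable command, a script like this is what keeps the shared description current instead of letting it rot.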
Project-Specific Description
The end result is a description — for example in YAML or JSON — of what happens across the system when a user presses a button in the UI. This file, combined with general instructions and a project-specific context document, forms the basis for creating a baseline plan when starting any change.
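To make this concrete, one entry in such a description might look like the flow below, here built as a Python dict and serialized to JSON. The action name, repositories, handler, and side effects are hypothetical; the point is the shape, and the small validator that fails fast when a flow is missing a field the agent relies on.

```python
import json

# Hypothetical cross-repo description of one user-visible flow.
flow = {
    "action": "user presses 'Place order'",
    "frontend": {"repo": "webshop-ui", "handler": "submitOrder()"},
    "backend": {"repo": "order-service", "endpoint": "POST /api/v1/orders"},
    "side_effects": ["order row inserted", "confirmation email queued"],
}

def validate_flow(flow: dict) -> None:
    """Raise if a flow entry lacks a field the baseline plan depends on."""
    for field in ("action", "frontend", "backend"):
        if field not in flow:
            raise ValueError(f"flow is missing required field: {field}")

validate_flow(flow)
document = json.dumps(flow, indent=2)  # ready to drop into the context file
```

A validator this small is cheap insurance: it catches a half-filled entry before the agent plans a change around it.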
This is not a one-time effort. The description must be updated as the architecture evolves. But updating it can be delegated to the model and to callable scripts, so no manual maintenance is required.
Once this structure is in place, the agent starts functioning properly. It sees the whole system. It understands dependencies. It knows what to change when a feature needs to change.
Context Defines What the Agent Sees
The effective context window of language models is constrained — and this won’t change in the near future. You can give the model more tokens, but what it can truly utilize remains limited.
That is why building context is not optional. It is a prerequisite for an agent working in a multi-repo environment without constantly stumbling over blind spots.
Don’t wait for the model to solve this. Build the context for it.
For how context isolation applies at the individual task level, see Why TDD and AI Need Separate Contexts. For the broader pattern of how AI amplifies your current engineering practices, see AI Is a Multiplier of Your Current State.
Bytecraft helps engineering teams build AI-assisted development workflows that work with the constraints of language models rather than against them. Explore our consulting services to learn how we approach this in practice.
