What AI Copilots Actually Need to Work on IBM i
STRATALIS — Practical modernization for real-world enterprise systems
The question isn't whether AI can handle RPG code. It's whether your IBM i environment is ready for AI.
When AI coding tools come up in an IBM i organization, the reaction is rarely simple.
Leadership sees developer productivity gains and wants to move fast. Developers wonder whether it is even technically viable on a platform this old. And underneath both reactions is a quieter concern that nobody states directly: if we point an AI at this codebase, what are we actually exposing?
These are all legitimate reactions. But they are all responses to the wrong question.
The conversation in most IBM i shops starts with "Which AI tool should we use?" It should start with something more fundamental: is this environment actually ready for AI assistance at all?
In most cases, the honest answer is not yet. And the reason has nothing to do with the AI tools. It has everything to do with what those tools expect to find when they arrive.
Why AI Struggles With RPG Codebases
The first thing most IBM i shops discover when they try an AI coding tool on an RPG codebase is that the generated output looks convincing and fails to run.
Visually it reads like it should work. The structure looks familiar. But when it hits the environment, something breaks — syntax issues, missing context, logic that doesn't account for how ILE programs actually call each other or how service programs expose their interfaces. In practice, the debugging overhead can exceed the productivity gain, which means the tool has made the developer slower, not faster.
The natural conclusion is: the AI isn't good enough yet for IBM i. It needs more training. Better prompts. A more specialized model.
That conclusion is wrong.
A Context Problem, Not a Capability Problem
The real issue is not what the AI knows. It is what the AI can see.
General-purpose AI tools are built on a set of assumptions about how codebases are structured: files with navigable relationships, version history that reveals intent, documentation that makes logic explicit, a system-wide view that connects components to each other.
IBM i environments rarely provide any of this.
RPG source code lives in source members inside source physical files, organized by language type — QRPGLESRC, QCLSRC, QDDSSRC — not by the business function they serve. Program-to-program relationships are not declared in source. Service program dependencies exist as binding directory entries and object references, invisible to anything reading the source alone. Business logic is encoded in RPG indicators, nested conditions, and logic chains that span multiple programs with no map connecting them.
The codebase is not a navigable structure. It is a collection of source members that only makes sense to people who already know how the system works.
This is not a training data problem. It is not a model capability problem. It is a structural mismatch between what AI tooling expects and what IBM i environments provide.
AI assumes context. IBM i source environments do not provide it.
What AI Tools Actually Expect
Every AI coding tool — regardless of vendor or architecture — operates on the same fundamental expectation: that the code it is working with exists in a structured, connected, navigable environment.
IBM i shops have traditionally managed source in source physical files. This is not a minor difference in tooling preference. Source physical files and Git repositories are fundamentally different organizational concepts. A source physical file organizes members by language type. A Git repository organizes code by function, relationship, and history. AI tools are built for the second. Most IBM i environments provide only the first.
AI tools perform reasonably well on understanding tasks even in fragmented environments — reading an RPG program and explaining what it does, summarizing a CL procedure, documenting a service program's interface. These tasks require reading, not constructing. They can work from partial context.
Generative tasks are different. Producing new RPG code, refactoring a program that calls multiple service programs, modifying logic that enforces rules across several source members — these require the AI to reason about the broader system. Without that structure, the AI is guessing. It may guess well enough to look convincing. It will rarely guess well enough to work correctly in an ILE environment.
IBM i environments are not designed as navigable codebases. AI tools are built on the assumption that they are. That gap is the real barrier to adoption — and it is architectural, not technical.
What AI Readiness Actually Requires
Before asking which AI tool to use, IBM i shops need to ask whether the prerequisites are in place for AI tooling to function at all.
In most shops, they are not. Not because teams have failed at anything — but because these prerequisites were never required before, and no one has been explicit about what they are.
Source control first. RPG and CL source must exist in structured, versioned repositories — not source physical files. Not because Git is the industry standard, though it is, but because AI tools are built on the assumption of navigable, historical codebases. Source members in QRPGLESRC are not an equivalent structure. They are a different concept entirely, and assuming equivalence is one of the primary reasons AI adoption in IBM i shops stalls before it starts.
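What that migration looks like in practice can be sketched in a few lines. This is a minimal illustration, not a migration tool: it assumes source members have already been exported from QRPGLESRC to stream files on the IFS, and the prefix-to-function mapping (ORD, INV, CUS) is entirely hypothetical — in a real shop that mapping comes from the team's knowledge of the system, and tools like IBM's CPYTOSTMF or ARCAD/Merlin-style migrations handle the export itself.

```python
from pathlib import Path
import shutil

# Hypothetical mapping from member-name prefixes to business functions.
# A real migration would derive this from the team's knowledge of the system.
FUNCTION_MAP = {
    "ORD": "orders",
    "INV": "inventory",
    "CUS": "customers",
}

def target_path(member: Path, repo_root: Path) -> Path:
    """Place an exported source member under a function-oriented directory.

    Members whose prefix is unknown land in 'unsorted' for manual triage.
    """
    prefix = member.stem[:3].upper()
    function = FUNCTION_MAP.get(prefix, "unsorted")
    return repo_root / function / member.name

def reorganize(export_root: Path, repo_root: Path) -> list[tuple[Path, Path]]:
    """Copy members exported from QRPGLESRC etc. into the new layout."""
    moves = []
    for member in sorted(export_root.rglob("*.rpgle")):
        dest = target_path(member, repo_root)
        dest.parent.mkdir(parents=True, exist_ok=True)
        shutil.copy2(member, dest)
        moves.append((member, dest))
    return moves
```

The point of the sketch is the shape of the result: a directory tree organized by what the code does, which is the structure Git history and AI tooling can actually navigate.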
Explicit context. The business logic buried in RPG programs, the rules encoded in service programs, the processing happening in batch jobs — this knowledge needs to exist somewhere outside the code itself, in a form that AI tools can consume. Inline documentation, structured comments, naming conventions that carry meaning — all of it becomes context that improves AI output quality. IBM i shops with no machine-readable context will get generic output. Shops that invest in making their context explicit will get genuinely useful assistance.
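One lightweight way to make that context machine-readable is a tagging convention in member header comments. The sketch below assumes a hypothetical convention — `// @purpose:` and `// @calls:` tags in the top comment block — and harvests them into a dictionary that could be serialized as JSON for AI tooling to consume. The tag names are illustrative, not a standard.

```python
import re
from pathlib import Path

# Hypothetical tag convention for machine-readable context in RPG headers:
#   // @purpose: Validate order lines before release
#   // @calls:   ORDSRV, INVSRV
TAG = re.compile(r"^\s*//\s*@(\w+):\s*(.+?)\s*$")

def extract_context(source: str) -> dict:
    """Collect @tag annotations from the top comment block of a member."""
    context: dict = {}
    for line in source.splitlines():
        m = TAG.match(line)
        if m:
            key, value = m.group(1), m.group(2)
            # Comma-separated values become lists (e.g. a @calls tag).
            context[key] = [v.strip() for v in value.split(",")] if "," in value else value
        elif line.strip() and not line.lstrip().startswith("//"):
            break  # stop at the first line of real code
    return context

def build_context_index(repo: Path) -> dict:
    """One index per repository, keyed by member name."""
    return {p.name: extract_context(p.read_text()) for p in sorted(repo.rglob("*.rpgle"))}
```

Whatever the convention, the principle is the same: knowledge that today lives only in developers' heads gets written down once, in a form both humans and AI tools can read.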
System visibility. Which programs call which service programs. Which batch jobs depend on which physical files. What the object dependency chain looks like across the system. AI cannot reconstruct these relationships from source members alone. ILE binding relationships, call chains, and data dependencies must be explicit before AI assistance becomes reliable.
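The raw material for this already exists on the platform — commands like DSPPGMREF can dump program-to-object references to an outfile. What is usually missing is the step that turns those rows into a navigable graph. The sketch below assumes that data has been exported to a two-column CSV; the column names PROGRAM and REF_OBJECT are illustrative, and a real export would need the outfile's actual field names mapped in.

```python
import csv
from collections import defaultdict
from io import StringIO

def build_call_graph(csv_text: str) -> dict:
    """Caller -> set of referenced programs, from a two-column CSV export.

    Column names PROGRAM and REF_OBJECT are illustrative stand-ins for
    whatever the real reference-data export provides.
    """
    graph = defaultdict(set)
    for row in csv.DictReader(StringIO(csv_text)):
        graph[row["PROGRAM"]].add(row["REF_OBJECT"])
    return dict(graph)

def reachable_from(graph: dict, start: str) -> set:
    """Everything transitively called from one entry point."""
    seen, stack = set(), [start]
    while stack:
        node = stack.pop()
        for callee in graph.get(node, ()):
            if callee not in seen:
                seen.add(callee)
                stack.append(callee)
    return seen
```

A graph like this is exactly the cross-program context an AI tool cannot reconstruct from source members alone — and exactly what it needs before refactoring anything that spans an ILE call chain.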
A workspace model, not a member model. AI tools operate across projects — holding context across components, tracking relationships, and reasoning about systems. IBM i development has traditionally centered on the individual source member opened in an editor. That model cannot support the kind of cross-program reasoning that makes AI assistance powerful.
The Real Question
The IBM i community is asking whether AI copilots can work on RPG codebases. That is the right conversation to be having.
But the answer is not a simple yes or no. It is: yes, if the foundational work has been done. No, if it hasn't — and most IBM i shops haven't done it yet.
AI is not the foundation. It is the top layer. The productivity gains are real — for shops that are structurally ready to receive them. Readiness requires system-level changes. No amount of better prompting closes the gap between a source physical file and a navigable, context-rich repository.
The shops that will get real value from AI are the ones that treat source control, explicit context, system visibility, and workspace readiness as infrastructure investments — not prerequisites to get to eventually, but the work that has to happen before anything else.
The AI is ready for IBM i. The question is whether IBM i is ready for the AI.