Every kernel rule must survive a model swap. If it only works because of Claude-specific training, it does not belong.
The portability test is the filter Arkeus applies to kernel content. Before any rule, file, or protocol lands in the always-on kernel, one question gets asked: would it still work on Gemini 3, GPT-6, or Llama 5? If the answer is no — if the rule only works because Claude's specific RLHF lineage has absorbed a particular pattern of constitutional training — the rule does not belong in the kernel.
Portable things pass the test. Corrections written contrastively are portable: the pattern of incident-plus-rule lands on any frontier model. Voice rules expressed as cut-X-add-Y are portable: they describe the transformation, not a quirk of any one model's output distribution. XML-tagged system reminders are portable: nearly all frontier models parse them. Provenance frontmatter is portable because it is just metadata.
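The portable categories above can be made concrete. Here is a hypothetical kernel correction file combining all three: provenance frontmatter, a contrastive incident-plus-rule correction, and an XML-tagged reminder. The field names, tag name, and incident are illustrative assumptions, not Arkeus's actual schema:

```markdown
---
# Provenance frontmatter: plain metadata, reads the same on any runtime.
source: incident-2025-03-14   # hypothetical identifier
scope: kernel
---

<system_reminder>
  <!-- XML-tagged reminder: nearly all frontier models parse tags like this. -->
  Correction (contrastive, incident-plus-rule):
  - Incident: a summary opened with throat-clearing ("It's worth noting that...").
  - Rule: cut hedging openers; add the main claim in the first sentence.
</system_reminder>
```

Nothing in the file names a model or leans on a vendor's training quirks; the rule is expressed as a transformation (cut X, add Y), which is exactly what makes it survive a model swap.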
Non-portable things fail. Prompts that exploit Claude's specific refusal patterns fail. Constitutional-AI phrasing that assumes RLHF-trained priors fails. Self-reference in the style of "as Claude, I would" fails. Anything whose behavior depends on the quirks of the current vendor fails.
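Part of this test can be mechanized. A minimal lint sketch that flags the failure cases named above — the marker list and function name are assumptions for illustration, not an actual Arkeus tool:

```python
import re

# Hypothetical markers of vendor lock-in, drawn from the failure cases above.
NON_PORTABLE_PATTERNS = [
    r"\bas Claude\b",           # model self-reference
    r"\bconstitutional AI\b",   # phrasing that assumes RLHF-trained priors
    r"\bClaude('s)? refus",     # exploits vendor-specific refusal patterns
]

def portability_violations(text: str) -> list[str]:
    """Return the non-portable markers found in a kernel file's text."""
    found = []
    for pattern in NON_PORTABLE_PATTERNS:
        if re.search(pattern, text, flags=re.IGNORECASE):
            found.append(pattern)
    return found

# A rule written in a vendor's idiom trips the lint; a cut-X-add-Y rule does not.
print(portability_violations("As Claude, I would decline, per my constitutional AI training."))
print(portability_violations("Cut hedging openers; add the main claim first."))
```

A lint like this is only a first pass — it catches surface markers, not rules that silently depend on one model's output distribution — but it makes the kernel's portability claim checkable on every commit.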
The kernel is Claude-tuned today on purpose, because Claude is the active runtime and tuning for the runtime is cheaper than pretending it does not exist. But every kernel file ships with the understanding that it has to be translatable. When the inevitable port to a non-Claude model happens, the north star goes first and unchanged. Everything else gets rewritten in the target model's idiom, but the underlying rules stay the same.
The portability test is the hedge against vendor lock-in. It is what makes epistemic sovereignty actually hold up. A kernel that only works on one vendor is not a sovereign kernel — it is a leased one.