Attractor-sensitive language
A language layer for agentic systems
Small changes in language can shift how a model reads the task, what role it assumes, and which kind of answer becomes likely. Bathysemantics studies that below-the-surface steering.
Premise
Small wording shifts can trigger qualitatively different trajectories in language models. Not just different answers, but different semantic attractors: different role assumptions, different conflict resolutions, different forms of depth or flattening.
Bathysemantics asks which beneath-the-surface structures do that work. Which images and tensions are already embedded in a term? Which local meanings are being stabilized? Which frames are being activated long before a system explains itself explicitly?
Where it matters
01
Why seemingly minor phrasings can shift a model from summary to synthesis, from compliance to investigation, or from surface completion to structural reasoning.
02
How resonantly chosen terms reduce explanation cost by carrying image, direction, and behavioral expectation before a system is fully formalized.
03
Metaphor is not mere ornament. In pattern-based systems, it helps pre-structure interpretation, transition logic, and the field of likely continuations.
Relation to adjacent work
Current position
Bathysemantics is not a replacement for prompt engineering, context engineering, or harness engineering. It is a way to ask what the language inside those practices is already doing before anyone formalizes it as a rule.
What comes next
Compare a surface reading with a bathysemantic reading and see what becomes visible only at depth.
Observe how minimal language shifts activate different semantic attractors and response trajectories; one way to probe this is sketched after this list.
Ask when a term functions as ornament, label, or real semantic infrastructure.