Explicit vs. Implicit in the Age of Intelligences
Why the implicit becomes fragile when humans and agents co-produce code.
In the previous article, I defended a simple idea: code is not elegant prose. It is a system, to be judged by its robustness.
But that idea calls for another one.
If code is a system, then it is not enough for it to be pleasant to write, locally fluid to read, or elegant in appearance. Its structures must remain visible. Its boundaries, states, contracts, responsibilities, and effects must be understandable, revisitable, criticizable, and transformable.
In other words: the system must be explicit.
This is where a very contemporary tension appears. For a long time, part of software tooling has valued the implicit: conventions, framework magic, inferred behaviors, invisible structures, shortcuts that let us move faster. This has often been useful. It has sometimes even been very productive.
But in the age of intelligences — human and artificial — the implicit changes in nature.
The problem with the implicit is not only that it is hidden. It is that it is no longer guaranteed to be shared.
The implicit has long been an accelerator
We should not caricature the implicit.
The implicit is not bad by nature. It has made it possible to build more fluid tools, frameworks that are faster to adopt, and more productive conventions. It has made it possible to write less, repeat less, and move part of the work into the framework, the language, or the environment.
This is often what makes a technology pleasant.
A file placed in the right location automatically becomes a route. A function named in a certain way is recognized by a tool. A discreet directive changes the execution location of a piece of code. A naming convention avoids explicit configuration. A React hook makes it possible to reuse logic by connecting to a component’s lifecycle.
All of this can be practical.
And sometimes, this local comfort creates an impression of elegance.
But we need to name what is really happening: part of the system’s structure is no longer in the code itself. It is in the prior knowledge one must possess to interpret that code correctly.
The implicit works well when everyone shares the same context.
It works less well when that context becomes unstable.
The implicit becomes fragile when it is no longer shared
In a stable human team, some implicit knowledge can hold for a long time.
Developers know how the system works. They know the local conventions. They know why one layer must not import another. They know that a given function must only be called in a given state. They know that a given hook is fragile, that a given abstraction hides a side effect, that a given route depends on a file convention.
This knowledge is not always written down. But it circulates through habit, code review, oral transmission, and familiarity with the project.
In the age of agents, this situation becomes more fragile.
Humans and agents do not produce code according to the same internal logic. Humans mobilize concepts, intentions, mental models, and a more or less explicit causal understanding. Agents produce a plausible continuation under constraints, then are brought back into line by the compiler, the type checker, tests, linting, and audit rules.
Both can produce valid code.
But they do not produce it in the same way.
And above all, they do not necessarily share the same implicit knowledge.
An agent can introduce a locally plausible form, compatible with immediate constraints, but unnamed, unstabilized, and unshared by the humans who will later take over the system. Another agent, or another version of the same agent, may infer something else. A human may not recognize the hidden assumption. The code may continue to work, but its collective intelligibility will have decreased.
An unshared implicit becomes an invisible dependency of the system.
And an invisible dependency always ends up being costly.
A probabilistic machine facing a system that must remain stable
An LLM is a probabilistic machine.
It produces a plausible continuation. It works from a context, learned patterns, examples, constraints, and tool feedback. It does not understand code the way a human understands a system they have designed, inhabited, corrected, maintained, and confronted with reality.
And yet the code it produces must join an artifact that cannot remain probabilistic.
The software system must tend toward something stable: defined states, controlled transitions, verifiable contracts, localized effects, observable errors.
There is therefore a central tension: a probabilistic machine participates in producing a system that must remain predictable, testable, and controllable.
The explicit does not remove that tension.
It reduces it.
The more a system relies on implicit knowledge, the more the agent has to infer. The more it has to infer, the more likely drift becomes. Conversely, the more concepts are made visible, the less the agent has to guess what is expected.
The explicit does not cancel the probabilistic nature of the model. It bounds what the model needs to guess.
This is why the explicit is not only a reading aid. It is the shared language of production for a system whose authors do not reason in the same way.
When the producers of code do not reason in the same way, the explicit becomes a condition of coordination.
The explicit creates cognitive portability
There is another important consequence.
When concepts are explicit, the language becomes partly secondary.
Take a simple example: the possible states of a system.
In TypeScript, they can be represented with a discriminated union:
```typescript
type RecorderState =
  | { type: "idle" }
  | { type: "recording"; sessionId: string }
  | { type: "paused"; sessionId: string }
  | { type: "error"; reason: string };
```

In Rust, we would rather write:
```rust
enum RecorderState {
    Idle,
    Recording { session_id: String },
    Paused { session_id: String },
    Error { reason: String },
}
```

In Swift:
```swift
enum RecorderState {
    case idle
    case recording(sessionId: String)
    case paused(sessionId: String)
    case error(reason: String)
}
```

In Kotlin:
```kotlin
sealed interface RecorderState {
    data object Idle : RecorderState
    data class Recording(val sessionId: String) : RecorderState
    data class Paused(val sessionId: String) : RecorderState
    data class Error(val reason: String) : RecorderState
}
```

These are not the same languages.
But they are the same conceptual form.
We are no longer merely reading TypeScript, Rust, Swift, or Kotlin. We are reading a finite set of possible states. We are reading an intention of modeling. We are reading a structure that says: these are the cases recognized by the system.
This is what I would call cognitive portability.
The explicit does not only make code readable. It makes it readable independently of the language.
The implicit, by contrast, travels less well. It is often local to a framework, a convention, a directive, runtime magic, or team knowledge. It can be very effective inside its own context. But as soon as one changes language, framework, agent, team, or era, it becomes more fragile.
The explicit makes forms recognizable.
And recognizable forms make convergence easier.
A grid for analyzing the implicit
We can then propose a simple grid.
To analyze the implicit in real code, we need to cross three dimensions.
The first dimension is the system concepts:
- boundaries;
- contracts;
- validation;
- invariants;
- explicit states;
- responsibility and authority;
- deterministic transformations;
- proof;
- observability.
The second dimension is the layers where these concepts can appear:
- the language;
- the framework;
- libraries;
- application architecture;
- team or project conventions.
The third dimension is their mode of expression: explicit or implicit.
Each time, the question is the same:
Is this concept made visible in the code, or does it rely on a convention, magic, an abstraction, or prior knowledge?
This is not a condemnation of all abstraction.
A good abstraction hides noise.
A bad abstraction hides the system.
The difference is essential.
When an abstraction hides an unimportant detail, it helps us. When it hides a boundary, a state, a responsibility, a side effect, or an invariant, it makes the system harder to understand and more prone to drift.
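To make the distinction concrete, here is a hedged TypeScript sketch. All names are illustrative, not taken from any real codebase: one function hides noise (formatting details), another hides the system (a silent write to shared state), and a third makes that effect visible in its signature.

```typescript
// Good abstraction: hides noise. The caller loses nothing important
// by not knowing the formatting details.
function formatPrice(cents: number): string {
  return `$${(cents / 100).toFixed(2)}`;
}

// Bad abstraction: hides the system. The name promises a pure lookup,
// but the function silently mutates shared state — a side effect
// invisible at the call site, i.e. an invisible dependency.
const hiddenCache = new Map<string, string>();
function getUserName(id: string): string {
  const name = `user-${id}`; // imagine a real fetch here
  hiddenCache.set(id, name); // hidden write
  return name;
}

// More explicit alternative: the effect appears in the signature,
// so the dependency is part of the visible contract.
function lookupUserName(id: string, cache: Map<string, string>): string {
  const name = `user-${id}`;
  cache.set(id, name);
  return name;
}
```

The point is not that caching is wrong; it is that a reader of `getUserName`'s call site cannot see the write, while a reader of `lookupUserName`'s call site can.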
Here is a synthetic version of this grid:
| Concept | Layer | Explicit form | Implicit form | Risk |
|---|---|---|---|---|
| Boundaries | Language | modules, visibility, input/output types, controlled imports | free imports, globals, unchecked conventions | porous boundaries |
| Boundaries | Framework | server/, client/ files, separate handlers, server-only | discreet directive, colocation that masks client/server | forgetting the boundary |
| Contracts | Language | strict types, interfaces, discriminated unions | any, free strings, informal objects | silent assumptions |
| Validation | Framework / library | validation schemas at boundaries, explicit parsing | payload accepted by convention | doubtful data in the core |
| Invariants | Architecture | controlled constructor, bounded domain model, forbidden transitions | mutable objects everywhere, scattered rules | possible inconsistent state |
| States | Language / architecture | state machine, statechart, discriminated unions | scattered booleans, local conditions | illegal transitions |
| Responsibility / authority | Architecture | clear ownership of a resource or transition | several layers can mutate the same thing | unclear authority |
| Deterministic transformations | Business code | pure functions, immutability, clear input/output | hidden side effects | behavior hard to test |
| Proof | Language / tools | typecheck, exhaustiveness, tests, constraints | human trust, conventions | undetected omission |
| Observability | Runtime / architecture | contextual logs, traces, correlation IDs | console.error, generic error without context | impossible to understand production |
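To make one row of this grid concrete — validation at a boundary — here is a hedged TypeScript sketch. No particular validation library is assumed; the names are illustrative. The explicit form turns unknown input into a typed value, or refuses it with a visible reason, before anything reaches the core.

```typescript
// Hedged sketch: a hand-rolled boundary parser (illustrative names).

type StartRecordingCommand = { sessionId: string; maxDurationSec: number };

type ParseResult<T> =
  | { ok: true; value: T }
  | { ok: false; reason: string };

// Explicit form: doubtful data is stopped at the boundary.
function parseStartRecordingCommand(
  input: unknown,
): ParseResult<StartRecordingCommand> {
  if (typeof input !== "object" || input === null) {
    return { ok: false, reason: "payload must be an object" };
  }
  const record = input as Record<string, unknown>;
  if (typeof record.sessionId !== "string" || record.sessionId.length === 0) {
    return { ok: false, reason: "sessionId must be a non-empty string" };
  }
  if (typeof record.maxDurationSec !== "number" || record.maxDurationSec <= 0) {
    return { ok: false, reason: "maxDurationSec must be a positive number" };
  }
  return {
    ok: true,
    value: {
      sessionId: record.sessionId,
      maxDurationSec: record.maxDurationSec,
    },
  };
}
```

The implicit form would be something like `req.body as StartRecordingCommand`: the payload is accepted by convention, and the doubt travels into the core with it.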
The point of this grid is not to be exhaustive. It is mainly to shift the question.
We no longer ask only: “is this code clean?”
We ask: “where are the important concepts of the system visible? And where have they disappeared behind implicitness?”
Example: boundaries in Next.js
Take a contemporary example: Next.js Server Actions.
The problem is not that the tool is bad. Nor is the problem that it forces boundaries to be blurred. It is perfectly possible to define server actions separately, in dedicated files, and to mark the split clearly.
But the framework makes possible, through a directive, a very practical form of colocation between frontend code and a server action. This local comfort can create an impression of elegance.
And yet the boundary has not disappeared.
It has only become more discreet.
Behind this apparent simplicity, there is still a real mechanism: server functions callable over the network, authentication and authorization checks to perform again inside the action itself, captured variables sent and then returned, encryption, dependency on the build.
The framework takes part of this on itself, but the abstraction does not remove the complexity. It shifts it and partially masks it.
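To make the requirement concrete outside any framework, here is a hedged, framework-free TypeScript sketch. Every name is hypothetical; the point is the shape of the boundary, not any specific framework API: a function that is conceptually callable over the network must re-check authorization inside itself, rather than trusting that the client only calls it in the "right" state.

```typescript
// Hedged sketch: all names hypothetical, no framework assumed.

type Session = { userId: string; canRecord: boolean };

type ActionResult =
  | { status: "ok"; recordingId: string }
  | { status: "unauthorized" };

// This function is conceptually reachable over the network.
// The authorization check therefore lives inside the action itself:
// the boundary is explicit, not assumed away by colocation.
function startRecordingAction(session: Session | null): ActionResult {
  if (session === null || !session.canRecord) {
    return { status: "unauthorized" };
  }
  return { status: "ok", recordingId: `rec-${session.userId}` };
}
```

A discreet directive can make such a function look like a local call, but the check above does not become optional; it only becomes easier to forget.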
In the grid, we could say:
- concept: boundary;
- layer: framework;
- explicit form: clear separation between client and server;
- implicit form: very practical colocation that makes the boundary less visible;
- risk: forgetting the requirements specific to that boundary.
Again, the problem is not the tool.
The problem is how easily local comfort can make us forget a real boundary.
Example: React hooks
React hooks are another interesting example.
They brought something very powerful: the reuse of logic.
Before hooks, React already made views composable. With hooks, it became possible to make local logic reusable too: state, effects, subscriptions, access to the lifecycle, shared behaviors.
It was a real step forward.
But that progress came with a very strong form of implicitness.
Hooks are not ordinary JavaScript. They obey specific rules: constrained call order, impossibility of calling them conditionally, the use prefix, dependencies to maintain in arrays, dedicated linting to flag invalid usages.
Part of the system therefore lives in the implicit rules of the framework.
Of course, we can learn these rules. We can tool them. We can respect them. But we must recognize what they are: prior knowledge required to correctly interpret the code.
And the problem becomes clearer when hooks are used to carry complex behavior.
A few useState, a few useEffect, a few callbacks, a few handlers, and suddenly the component no longer merely renders a view. It hides a small system: states, transitions, effects, resources, invariants, lifecycle.
The view becomes the place where behavior has been mixed in.
The logic is reusable, but it is not necessarily explicit.
Hooks made logic reusable. They did not make it explicit.
Moving logic out of React
One possible direction is not to remove hooks.
It is to put them back in their place.
In this approach, React remains a layer of view and adaptation. The important logic lives in an explicit model outside React.
The model carries:
- states;
- events;
- transitions;
- effects;
- resources;
- invariants;
- the data exposed to the interface.
React merely subscribes to the model and sends it user events.
We could represent the architecture like this:
```txt
Explicit model → Snapshot / ViewModel → React view
```

The hook then becomes an adapter, not the main place where logic lives.
For example, for a simplified recording system:
```typescript
const recorderModel = createRecorderModel({
  services: {
    recordingApi,
    uploader,
    createMediaRecorder,
  },
});

recorderModel.send({ type: "start" });
recorderModel.send({ type: "pause" });
recorderModel.send({ type: "resume" });
recorderModel.send({ type: "discard" });
```

The model exposes its states and events:
```typescript
type RecorderState =
  | { type: "idle" }
  | { type: "recording"; sessionId: string; uploadedSegments: number }
  | { type: "paused"; sessionId: string; uploadedSegments: number }
  | { type: "saving"; sessionId: string }
  | { type: "saved"; recordingId: string }
  | { type: "discarded" }
  | { type: "error"; reason: string };

type RecorderEvent =
  | { type: "start" }
  | { type: "pause" }
  | { type: "resume" }
  | { type: "segment-ready"; blob: Blob }
  | { type: "save" }
  | { type: "discard" };
```

Then React subscribes:
```typescript
function useRecorderModel(model: RecorderModel) {
  const snapshot = useSyncExternalStore(
    model.subscribe,
    model.getSnapshot,
    model.getSnapshot,
  );

  useEffect(() => {
    model.mount();
    return () => model.unmount();
  }, [model]);

  return {
    snapshot,
    send: model.send,
  };
}
```

And the component no longer carries the main behavior:
```tsx
function RecorderScreen({ model }: { model: RecorderModel }) {
  const recorder = useRecorderModel(model);

  return (
    <RecorderView
      snapshot={recorder.snapshot}
      onStart={() => recorder.send({ type: "start" })}
      onPause={() => recorder.send({ type: "pause" })}
      onResume={() => recorder.send({ type: "resume" })}
      onSave={() => recorder.send({ type: "save" })}
      onDiscard={() => recorder.send({ type: "discard" })}
    />
  );
}
```

The exact details of the API matter less than the conceptual shift.
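For concreteness, here is one hedged sketch of what a minimal model of this kind could look like. It is deliberately simplified — fewer states, no services, a hard-coded session id — and every name is illustrative, not a real API.

```typescript
// Hedged sketch: a minimal explicit model, no framework dependency.

type State =
  | { type: "idle" }
  | { type: "recording"; sessionId: string }
  | { type: "paused"; sessionId: string };

type Event =
  | { type: "start" }
  | { type: "pause" }
  | { type: "resume" }
  | { type: "discard" };

// Transitions are a pure, testable function; illegal events are ignored.
function transition(state: State, event: Event): State {
  switch (state.type) {
    case "idle":
      // Simplified: a real model would obtain the session id from a service.
      return event.type === "start" ? { type: "recording", sessionId: "s1" } : state;
    case "recording":
      if (event.type === "pause") return { type: "paused", sessionId: state.sessionId };
      if (event.type === "discard") return { type: "idle" };
      return state;
    case "paused":
      if (event.type === "resume") return { type: "recording", sessionId: state.sessionId };
      if (event.type === "discard") return { type: "idle" };
      return state;
  }
}

// The model holds current state and notifies subscribers on change —
// exactly the shape useSyncExternalStore expects on the React side.
function createRecorderModel() {
  let state: State = { type: "idle" };
  const listeners = new Set<() => void>();
  return {
    getSnapshot: (): State => state,
    subscribe: (listener: () => void): (() => void) => {
      listeners.add(listener);
      return () => {
        listeners.delete(listener);
      };
    },
    send: (event: Event): void => {
      state = transition(state, event);
      listeners.forEach((listener) => listener());
    },
  };
}
```

Because the transitions live in a pure function, the important behavior can be exercised without rendering a single component.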
The hook no longer hides a combination of states, effects, resources, and implicit rules. It connects React to an explicit model.
Libraries like XState already move in this direction: the logic is externalized into an explicit model, and React mostly serves as a display and adaptation layer.
XState is not the only example. Redux was already moving in that direction: taking transition logic out of components, representing changes as actions, and making state evolve through reducers that can be tested separately. Even useReducer, when used with a reducer defined outside the component, can be a step toward more explicitness. The difference is therefore not between “hooks” and “no hooks.” It is between logic hidden inside the component and logic made visible in an autonomous form: reducer, store, state machine, or explicit model.
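A hedged sketch of the `useReducer` variant: the reducer is defined outside the component, so the transition logic can be tested without rendering anything. All names are illustrative; a component would consume it with `useReducer(recorderReducer, initialRecorderState)`.

```typescript
// Hedged sketch: transition logic in an autonomous, React-free form.

type TimerState =
  | { type: "idle" }
  | { type: "recording"; elapsedSec: number }
  | { type: "paused"; elapsedSec: number };

type TimerAction =
  | { type: "start" }
  | { type: "pause" }
  | { type: "resume" }
  | { type: "tick" };

const initialRecorderState: TimerState = { type: "idle" };

// Every case returns a state; actions that are illegal in the
// current state leave it unchanged.
function recorderReducer(state: TimerState, action: TimerAction): TimerState {
  switch (action.type) {
    case "start":
      return state.type === "idle" ? { type: "recording", elapsedSec: 0 } : state;
    case "pause":
      return state.type === "recording"
        ? { type: "paused", elapsedSec: state.elapsedSec }
        : state;
    case "resume":
      return state.type === "paused"
        ? { type: "recording", elapsedSec: state.elapsedSec }
        : state;
    case "tick":
      return state.type === "recording"
        ? { type: "recording", elapsedSec: state.elapsedSec + 1 }
        : state;
  }
}
```

The component that calls `useReducer` stays thin; the system — states and transitions — is named and visible in one place.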
The problem is not that a hook calls this logic. The problem appears when the hook itself becomes the place where state, transitions, effects, and responsibilities get mixed together.
So the question is not: “should we use hooks or not?”
The question is: “when does a behavior become too systemic to remain hidden inside hooks?”
What should not be misunderstood
The explicit is not a religion.
This is not about making everything heavy, verbose, ceremonial. It is not about rejecting conventions, frameworks, abstractions, or shortcuts.
A fully explicit but poorly organized system can be unreadable. The explicit is not enough to produce clarity.
We must therefore avoid the wrong conclusion: more code does not automatically mean more robustness.
What matters is making explicit the concepts that truly carry the system.
A local convention can work very well if it does not hide anything essential. An abstraction can be excellent if it hides noise, not a responsibility. A framework can increase robustness if it makes the right boundaries visible instead of diluting them.
The goal is not to naively oppose explicit and implicit.
The goal is to know where the implicit becomes dangerous.
It becomes dangerous when it hides a boundary, a state, an invariant, an authority, a side effect, a security assumption, a validation rule, a business transition, or production behavior.
That is where we need to make things visible.
That is where we need to name.
That is where we need to model.
Conclusion — the explicit as a support for convergence
In the age of intelligences, the explicit is not merely a style preference.
It is not merely a way to help humans read code.
It is a condition of coordination between producers of code who do not necessarily share the same implicit knowledge.
A human can read a system with a mental model. An agent can produce a plausible form under constraints. Another agent can produce a different one. A future maintainer can arrive without knowing the history of the project.
What allows them to converge is not a shared implicit.
It is a shared explicit structure.
The explicit makes concepts visible. It reduces the amount of guessing. It makes forms more portable from one language to another. It makes boundaries, states, responsibilities, and effects harder to forget.
It does not guarantee that the system is good.
But it makes one essential thing possible: to take it up again, discuss it, correct it, make it evolve.
The implicit can accelerate.
The explicit allows transmission.
And in a world where humans and agents write together, what cannot be transmitted eventually drifts.
A thought after reading?
If you would like to discuss this article, you can write to me here. I share because I care and want to learn. Please teach me with care.