Code Is a System, Not Elegant Prose

We still talk about code as if it were style. But code runs in the world.

Code is not elegant prose. It is a system, to be judged by its robustness.

Too often, we talk about code as if we were talking about style.

We say that a piece of code is elegant, beautiful, refined, clean, sometimes even inspired. We attribute to it qualities that are almost literary, almost artistic, as if writing software were first and foremost an art of form. As if the quality of code were decided mainly by the way its sentences look.

That is a category error.

The problem is not the word elegant in itself. The problem is everything it carries when it is used in a romantic, literary, aesthetic sense: the idea that good code is, first and foremost, code admired for its formal poise, its grace, a kind of almost autonomous inner beauty.

And yet code is not a text to be contemplated. Code runs in the world.

It runs in machines, networks, browsers, phones, databases, message queues, distributed systems, human organizations. It encounters latency, outages, incomplete responses, side effects, inconsistent users, imperfect environments, intermediate states, unstable dependencies.

The real world does not applaud the beauty of an abstraction. It punishes the absence of guardrails.

That is why the right vocabulary is not, first and foremost, the vocabulary of elegance. It is the vocabulary of robustness.

One thing must be made clear, however: I am not arguing here against elegance in the sense some engineers mean it — clarity, sound decomposition, the right simplicity. That tradition is valuable. What I am aiming at is another way of talking about code: a more romantic, more aesthetic way, one that judges it first by its appearance rather than by how well it holds up in the real world.

What is often celebrated as “elegant” is not always a structural quality. It is sometimes a quality of local comfort: less code, more fluidity, more colocation, more convention, more implicitness. And yet what is practical to write or pleasant to read locally is not always what makes a system more explicit, more stable, or more transmissible.

Robustness as the real criterion

I use the word robustness here in a broad sense.

Robust code is not simply code that “doesn’t break.” Robust code is code that is:

  • intelligible;
  • resilient;
  • observable.

That definition changes the very way we judge software.

The question is no longer: is this code beautiful?

The question becomes:

  • Can a human understand it without taking a leap of faith?
  • Can a machine reason about it without guessing?
  • Does it absorb failures and deviations from the real world cleanly?
  • Can we know what it did, in what state, and why?
  • Can we take it up again, extend it, audit it, without weakening the whole?

That is where the real shift lies.

We should not judge code first as an aesthetic object. We should judge it as a system exposed to reality.

That does not mean every piece of code requires the same level of formalization everywhere, but that the right compass is not how code looks: it is the level of robustness the situation actually requires.

Structure has a cost as well. A prototype still looking for its problem does not need the same level of formalization as a critical system. The discipline is not to structure everything everywhere, but to know what deserves to be structured, when, and against what risk.

Intelligible, because it must be possible to read it without guessing

Robustness begins with intelligibility. And in this article, what I call intelligible depends first on one property: explicitness. Visible boundaries, clear responsibilities, named states, stated contracts, declared transitions, but also clear names, understandable flows, localized transformations, and stable conventions. A system does not automatically become simple merely because it is explicit. But without explicitness, it forces us to guess, and a system that forces us to guess quickly becomes costly.
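
To make that concrete, here is a minimal sketch in TypeScript, with a hypothetical configuration value: the same information, read implicitly and then explicitly.

```ts
// A hypothetical configuration value, read two ways.

// Implicit: the fallback, the unit, and the failure mode are all unstated.
function getTimeout(raw?: string): number {
  return Number(raw) || 3000;
}

// Explicit: the unit is in the name, the default is declared, and the
// invalid case is handled deliberately rather than by accident.
const DEFAULT_TIMEOUT_MS = 3000;

function parseTimeoutMs(raw: string | undefined): number {
  if (raw === undefined) return DEFAULT_TIMEOUT_MS;
  const parsed = Number(raw);
  return Number.isFinite(parsed) && parsed > 0 ? parsed : DEFAULT_TIMEOUT_MS;
}
```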

But today, that is no longer enough.

Code must also be intelligible to machines.

We are entering an era in which code is no longer read only by developers. It is also read, audited, transformed, completed, explained, and sometimes generated by intelligences. Ambiguous, implicit code, full of unstated conventions, does not merely become painful for a human colleague: it also becomes fragile for an agent. It forces the agent to guess.

And as soon as a system relies on guesswork, it becomes costly to maintain and dangerous to modify.

Intelligibility is therefore not a luxury of style. It is an operational property. And here, it depends largely on making the system explicit.

Resilient, because the world deviates

Good code is not code that assumes everything will go well. It is code designed in the knowledge that not everything will go as planned.

I use “resilient” here in a broad sense. A resilient system does not merely survive a failure. It absorbs deviations from the real world: latency, incomplete responses, unstable dependencies, replays, temporary unavailability, side effects, partial interruptions, variations in context.

As soon as a system touches the outside world, it encounters that uncertainty: a network response may arrive late, an API may change shape, a user may send absurd data, a database may be temporarily unavailable, an event may be processed twice, a state may be partially updated.

A robust system does not deny that reality. It frames it.

It distinguishes what is reliable from what is not.

It limits the places where side effects occur.

It makes error cases and degraded cases explicit.

It accepts that failure is part of the system’s normal behavior.

Robustness is therefore not an extra layer you add afterward with a few try/catch blocks. It is a way of designing software from the outset.
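
A sketch of that posture: failure and degradation are part of the function's declared result, not an afterthought. The endpoint, the cache, and the names are hypothetical, and a runtime with a global fetch is assumed.

```ts
// The possible outcomes are named, including the degraded one.
type FetchResult<T> =
  | { status: "ok"; value: T }
  | { status: "degraded"; value: T; warning: string } // e.g. stale cache
  | { status: "failed"; error: Error };

async function fetchPrice(
  url: string,
  cache: Map<string, number>,
): Promise<FetchResult<number>> {
  try {
    const response = await fetch(url);
    if (!response.ok) throw new Error(`HTTP ${response.status}`);
    const body = (await response.json()) as { price: number };
    cache.set(url, body.price);
    return { status: "ok", value: body.price };
  } catch (error) {
    const stale = cache.get(url);
    if (stale !== undefined) {
      // Degraded but explicit: the caller knows this value may be stale.
      return { status: "degraded", value: stale, warning: "served from cache" };
    }
    return {
      status: "failed",
      error: error instanceof Error ? error : new Error(String(error)),
    };
  }
}
```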

Observable, because in production we never see everything in advance

Even a well-designed system will encounter cases in production that nobody had fully anticipated.

That is inevitable.

The question, then, is not whether errors will occur. The question is whether, when they do occur, the system will give us the means to understand what happened.

Observable code produces useful traces. It makes important transitions, significant failures, useful contexts, external interactions, abnormal delays, and relevant business states visible.

It does not merely fail. It fails while leaving usable clues behind.

Observability is not a secondary concern reserved for operations or infrastructure. It is a software design concern. A poorly structured system is hard to observe because it already struggles to describe what it is doing. A well-structured system, by contrast, naturally makes it possible to attach traces to boundaries, contracts, validations, state transitions, and domain errors.

Observing a system is not a matter of adding lamps to an obscure machine. It is a matter of building a machine whose mechanisms can be inspected.
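
For instance, a domain error can carry its context instead of a bare message. A small sketch, with hypothetical names:

```ts
// A domain error that carries its context, not just a string.
class PaymentCaptureError extends Error {
  constructor(
    message: string,
    public readonly context: {
      orderId: string;
      provider: string;
      attempt: number;
      lastKnownState: string;
    },
  ) {
    super(message);
    this.name = "PaymentCaptureError";
  }
}

// A trace built from this error can say what failed, on which order,
// against which provider, and in what state, not just that it failed.
```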

What makes code robust

Once we accept that robustness is the real criterion, another family of concepts becomes central.

No longer the concepts of style, but the concepts of system.

And these concepts do not simply sit side by side. They follow one another.

This sequence is an order of exposition, not a strict order of construction. In practice, these dimensions respond to one another, correct one another, and are co-constructed. But to think about them clearly, it is useful to unfold them this way.

1. Boundaries

Everything begins with boundaries.

A robust system must know where it begins, where it ends, and where its zones of uncertainty lie.

Boundaries separate, for example:

  • the internal from the external;
  • the domain from the infrastructure;
  • pure logic from side effects;
  • trusted data from data that is still suspect.

Without boundaries, everything communicates with everything else. Responsibilities blur, causes spread, and errors become hard to localize.

Boundaries are therefore not just there to “organize code well.” They are there to contain uncertainty.
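
One way to make such a boundary visible, sketched in TypeScript with hypothetical types: data from outside stays typed as suspect until it passes an explicit checkpoint.

```ts
// Hypothetical types marking which side of the boundary data is on.
type Suspect = { readonly raw: unknown; readonly source: string };
type Trusted<T> = { readonly value: T };

// The only way to turn suspect data into trusted data is this checkpoint.
function crossBoundary<T>(
  input: Suspect,
  check: (raw: unknown) => T | null,
): Trusted<T> | null {
  const value = check(input.raw);
  return value === null ? null : { value };
}
```

Nothing clever happens here. The point is that the boundary now has a name, a type, and a single crossing point.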

A contemporary example: in web applications, some frameworks make client/server interactions feel very fluid. Next.js Server Actions, for example, do not force us to blur boundaries: they can perfectly well be defined separately, with the split clearly marked. But the framework makes possible, through a simple directive, a very practical form of colocation between frontend code and the server action. That local comfort can create an impression of elegance.

And yet the boundary has not disappeared: it has merely become less visible. Behind that apparent simplicity lies a whole real mechanism: server functions callable over the network, authentication and authorization checks that must be performed again inside the action itself, captured variables sent out and then sent back, encrypted, and tied to a given build. The framework takes part of that on itself, but this abstraction does not remove the complexity: it shifts it and partially masks it.

The problem is not the tool. The problem is how easily local comfort can make us forget the boundary, its requirements, and the operational burden that still exists behind it.

2. Contracts

As soon as a boundary exists, we need to define what is allowed to cross it.

That is the role of contracts.

A contract states what is expected, what is guaranteed, and under what conditions. It can take several forms: a type, an interface, a schema, a protocol, an explicit convention. The exact form does not matter. What matters is that it reduces ambiguity.

The logic is simple:

boundary → contract

We do not let just anything circulate between two zones of the system.

A system without contracts rests on assumptions. And assumptions fail silently.
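
A sketch of what a contract can look like at a boundary, with a hypothetical payment gateway: what is expected and what is guaranteed are both stated, including the refusal cases.

```ts
// A hypothetical contract at a payment boundary.
interface PaymentGateway {
  // Expected: a positive amount in cents and an opaque customer token.
  // Guaranteed: either a capture id, or an explicit, typed refusal.
  capture(
    amountCents: number,
    customerToken: string,
  ): Promise<
    | { outcome: "captured"; captureId: string }
    | { outcome: "refused"; reason: "insufficient_funds" | "expired_card" | "unknown" }
  >;
}
```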

3. Validation

A contract that is never checked remains wishful thinking.

That is why contracts call for validation.

As soon as data crosses a boundary, it must be validated. This applies to user input, an environment variable, an API response, an imported file, a message consumed from a queue, a network payload.

To validate is to refuse to pretend that every input is legitimate.

A robust system does not trust too early. It validates at the boundaries. It turns the unknown into the known.

That is especially true when we consume an external system. The mere fact that it advertises a contract never removes the need to verify what actually comes in. An API may drift, be incorrectly implemented, or occasionally violate its own promise. Without validation at the point of entry, we let data into the system that is treated as true even though it is not.

The chain then becomes:

boundaries → contracts → validation

And that chain is essential, because it prevents uncertainty from the outside world from silently contaminating the core of the system.
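
A sketch of that chain at a point of entry, here using the zod schema library (any validator would do; the endpoint and the expected shape are hypothetical):

```ts
import { z } from "zod";

// The contract, written down as a schema.
const QuoteSchema = z.object({
  symbol: z.string().min(1),
  price: z.number().positive(),
  currency: z.enum(["EUR", "USD"]),
});

type Quote = z.infer<typeof QuoteSchema>;

async function fetchValidatedQuote(url: string): Promise<Quote> {
  const response = await fetch(url);
  const body: unknown = await response.json();
  // The API advertises this shape; we still verify what actually came in.
  const result = QuoteSchema.safeParse(body);
  if (!result.success) {
    throw new Error(`Upstream contract violated: ${result.error.message}`);
  }
  return result.data; // From here on, the data is known, not assumed.
}
```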

4. Invariants

Once inputs have been framed and validated, the system can begin to protect its internal truths.

That is the role of invariants.

An invariant is something that must remain true.

For example: a paid order does not become pending again; an authenticated user always has a valid identifier; an object in a given state cannot trigger a given transition; data marked as validated actually conforms to the expected schema.

Invariants give the system anchor points. They reduce the space of possibilities. And reducing the space of possibilities is what makes the system more intelligible and safer.

An invariant is not merely an abstract truth; it is also something that must be protected against the system’s real behavior. If a payment webhook is replayed, for example, the system should not behave as if a brand-new event had arrived. Without idempotency guardrails, the same signal can produce multiple effects and make the state drift instead of stabilizing it.

We can put it like this:

validation protects invariants
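
A sketch of the webhook example above: the invariant "one event, one effect" is protected by an explicit idempotency guard. The storage and the names are hypothetical.

```ts
// Hypothetical guard: one webhook event id produces at most one effect.
const processedEventIds = new Set<string>();

function handlePaymentWebhook(event: { id: string; orderId: string }): void {
  if (processedEventIds.has(event.id)) {
    return; // Replay detected: absorb it instead of re-applying the effect.
  }
  processedEventIds.add(event.id);
  markOrderAsPaid(event.orderId); // the single, guarded state change
}

declare function markOrderAsPaid(orderId: string): void; // hypothetical effect
```

In a real system the set of seen ids would live in durable storage, but the shape of the guardrail is the same.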

5. Explicit states

As soon as a system depends on time, stages, permissions, sequences, events, or transitions, its state must be made explicit.

As soon as a resource exists over time, it also has a lifecycle. Thinking about that lifecycle explicitly means refusing to let it be scattered across a multitude of local, implicit, or contradictory conditions.

And in practice, making state explicit means modeling it.

We then mobilize conceptual tools such as:

  • state machines;
  • statecharts, when we need to add hierarchy, parallelism, or communication.

Why are they useful? Because they force us to name the possible states, define the allowed transitions, make triggering events explicit, and limit illegal moves.

They replace diffuse logic with a declared structure.

The progression then becomes:

invariants → explicit states → allowed transitions

And that is often where a system really begins to hold together. Not because it becomes more “intelligent,” but because it becomes less blurry.

Take an audio or video recording system. Before anything even starts, the user can choose a camera, a microphone, a background, check a preview. Then they start recording, can pause, resume, turn the camera off, change certain settings, while in the background the system creates a server-side trace, listens to devices, and uploads segments. At the end, they can delete or save. Without explicit states, all of this quickly gets scattered across local conditions and contradictory flags. With a state machine, we name the phases, constrain the transitions, and finally make coherent what the user sees, what the system does, and what the server receives.
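
A minimal sketch of such a machine, with simplified, hypothetical states and events: the possible states are named, and the illegal moves are refused rather than silently tolerated.

```ts
type RecorderState = "setup" | "recording" | "paused" | "finished";
type RecorderEvent = "start" | "pause" | "resume" | "stop";

// The whole lifecycle is declared in one table.
const transitions: Record<RecorderState, Partial<Record<RecorderEvent, RecorderState>>> = {
  setup: { start: "recording" },
  recording: { pause: "paused", stop: "finished" },
  paused: { resume: "recording", stop: "finished" },
  finished: {}, // terminal: no event can leave this state
};

function next(state: RecorderState, event: RecorderEvent): RecorderState {
  const target = transitions[state][event];
  if (target === undefined) {
    // The illegal move is named and refused, not silently tolerated.
    throw new Error(`Illegal transition: "${event}" while "${state}"`);
  }
  return target;
}
```

The table is the single place where the lifecycle is declared. What the user sees, what the system does, and what the server receives can all be checked against it.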

6. Responsibility and authority

Once states have been made explicit, another question becomes unavoidable: who is allowed to act on what, at what moment, and with what responsibilities?

Here, the question becomes one of responsibility and authority.

A system becomes blurry when multiple parties can create, modify, transition, or destroy the same resource without a clear rule. By contrast, a robust system knows who owns what in the operational sense: who is responsible for a piece of data, a process, a resource, a transition, a cleanup, a closure.

Lifecycle tells us in which states something may exist over time. Responsibility and authority tell us who may act on that thing, when, and under what conditions.

This point is essential, because an explicit state without explicit authority still leaves room for drift. We then know what may happen, but not who is legitimate to make it happen.

The progression is therefore enriched as follows:

explicit states → responsibility and authority → controlled transitions
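
A sketch of explicit authority, with hypothetical roles and rules: the question "who may trigger this transition?" is answered by a declared table, not by scattered conditions.

```ts
type Role = "owner" | "reviewer" | "system";
type OrderEvent = "submit" | "approve" | "cancel";

// Authority is declared in one place, not scattered across conditions.
const authority: Record<OrderEvent, readonly Role[]> = {
  submit: ["owner"],
  approve: ["reviewer"], // an owner cannot approve their own order
  cancel: ["owner", "system"],
};

function canTrigger(role: Role, event: OrderEvent): boolean {
  return authority[event].includes(role);
}
```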

7. Deterministic transformations

Once boundaries are drawn, contracts defined, validations in place, invariants protected, and states made explicit, another question becomes central: how do we make the system’s local behavior as readable and predictable as possible?

That is the point at which the question of deterministic transformations arises.

Whenever possible, we want transformations whose behavior is stable, understandable, and testable. We want, as much as possible, pure functions, immutability, composition, and isolated side effects.

Not out of doctrinal taste, nor to signal allegiance to a school, but because this makes the core of the system more robust.

Ideas from functional programming are valuable here. They do not replace boundaries, contracts, invariants, or explicit states. They simply make internal behavior more predictable.

A pure function does not, by itself, constitute a system invariant. It does, however, provide a local unit of predictability. Same input, same output, no hidden side effects. And those local units of predictability are extremely valuable when one wants to build a whole that is readable, testable, and reliable.

In practice: at the boundaries, we validate; in the core of the system, we seek purity as much as possible; as soon as time, permissions, or transitions matter, we make state explicit.
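
A sketch of that split: a pure, deterministic core surrounded by a thin shell where the side effects live. The pricing rules and the helpers are hypothetical, not real APIs.

```ts
// Core: same input, same output, no hidden effects, trivially testable.
function computeDiscount(totalCents: number, loyaltyYears: number): number {
  const rate = loyaltyYears >= 5 ? 0.1 : loyaltyYears >= 2 ? 0.05 : 0;
  return Math.round(totalCents * rate);
}

// Shell: the side effects (storage, network) live here, isolated.
async function applyDiscount(orderId: string): Promise<void> {
  const order = await loadOrder(orderId); // effect: read
  const discount = computeDiscount(order.totalCents, order.loyaltyYears); // pure
  await saveDiscount(orderId, discount); // effect: write
}

declare function loadOrder(
  id: string,
): Promise<{ totalCents: number; loyaltyYears: number }>;
declare function saveDiscount(id: string, cents: number): Promise<void>;
```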

That search for local determinism does not replace system thinking. It complements it.

And it naturally prepares the next step: proof.

8. Proof

Once boundaries are drawn, contracts defined, validations in place, invariants protected, and states made explicit, one question remains: what still depends solely on human discipline?

The question of proof then appears.

In software, proof is not always formal in the mathematical sense. But there are several degrees of proof:

  • types;
  • schemas;
  • tests;
  • exhaustiveness;
  • compile-time constraints;
  • the structural impossibility of certain states or certain transitions.

The central idea is simple: everything that can be guaranteed structurally should not rest solely on habit, caution, or a team’s memory.

We can summarize it this way:

contracts + invariants + explicit states → forms of proof

The more a system replaces implicit discipline with explicit guarantees, the more robust it becomes.

An important nuance must be added, however: proof here is not limited to the most common and accessible forms of structural guarantee. In some contexts, it can go as far as formal verification, using languages, proof assistants, and methods capable of demonstrating that a program satisfies certain specifications. This path remains demanding and uncommon in day-to-day development, but its existence reminds us that “proof” is not merely a convenient metaphor. In some cases, it can become a real technical objective.

We can also see this in more ordinary forms of proof. Between a state stored as a simple string and an exhaustive type that forces the compiler to handle every case, the difference is clear: in one case, we hope; in the other, we make certain omissions structurally impossible.
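
That contrast, sketched out: with a plain string we hope; with an exhaustive union, the compiler makes the omission structurally impossible.

```ts
type OrderStatus = "pending" | "paid" | "cancelled";

function label(status: OrderStatus): string {
  switch (status) {
    case "pending":
      return "Awaiting payment";
    case "paid":
      return "Paid";
    case "cancelled":
      return "Cancelled";
    default: {
      // If a new status is added and not handled above, this assignment
      // stops compiling: the omission becomes structurally impossible.
      const unreachable: never = status;
      return unreachable;
    }
  }
}
```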

9. Observability, as the extension of the system into the real world

And finally, everything that has been structured conceptually must be made visible when it is actually running.

Observability returns here, no longer as a simple general criterion, but as the concrete extension of the system’s entire architecture.

A well-structured system makes it possible to observe:

  • what enters through its boundaries;
  • what is accepted or rejected by validation;
  • which contracts were violated;
  • which invariants were protected or threatened;
  • what state it is in;
  • which transition took place;
  • which external interaction failed;
  • which abnormal delays or behaviors were encountered.

Observability is not something added at the end like technical decoration. It is attached to the very structure of the system.

The full logic then becomes:

boundaries → contracts → validation → invariants → explicit states → responsibility and authority → deterministic transformations → proof → observability

At that point, we are no longer talking merely about organized code. We are talking about a system that can be understood, constrained, evolved, and inspected.

The difference becomes obvious very quickly in production. A log that only says “500 error” signals a problem, but rarely helps us understand it. A log or trace carrying a correlation ID, the current business state, the attempted transition, and the rejected input is already telling part of the system’s story. Useful observability is not about producing more noise, but about making the system’s behavior intelligible when it deviates.
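
A sketch of such a trace, with hypothetical field names: one structured line that tells the story the bare "500 error" cannot.

```ts
interface TransitionTrace {
  correlationId: string; // follows the request across services
  resource: string;
  fromState: string;
  attemptedEvent: string;
  outcome: "accepted" | "rejected";
  rejectedInput?: unknown; // what the boundary refused, if anything
  at: string; // ISO timestamp
}

function traceTransition(trace: TransitionTrace): void {
  // One structured line instead of an opaque "500 error".
  console.log(JSON.stringify(trace));
}

traceTransition({
  correlationId: "req-7f3a",
  resource: "order/42",
  fromState: "pending",
  attemptedEvent: "refund",
  outcome: "rejected",
  rejectedInput: { amountCents: -100 },
  at: new Date().toISOString(),
});
```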

In the age of intelligences, blur becomes more costly

In the age of intelligences, system thinking becomes even more important. The issue is not merely that agents produce code. The issue is that this code must be able to circulate between several forms of intelligence: human and artificial today; often intertwined tomorrow. We do not merely want a machine to write code. We want it to write a system that a human can reread, understand, critique, and validate. And conversely, we want humans to produce structures that machines can follow without guessing.

When several humans and several agents intervene in the same code, the absence of boundaries, contracts, invariants, explicit states, and clear authorities opens the way to every kind of drift: drift of meaning, drift of architecture, drift of behavior. The code may continue to function, but it gradually stops forming a system. Conversely, a well-structured system does not merely make maintenance easier: it contains distortions, makes deviations visible, and allows transformation to remain controlled.

For a long time, a small team could compensate for blurry code through familiarity. Someone knew “how it really works.” An implicit, collective, more or less artisanal understanding made it possible to patch over the gaps.

That era is eroding.

Today, code circulates between more actors: developers, reviewers, newcomers, analysis tools, pipelines, assistants, agents. Every zone of ambiguity becomes a multiplied cost. Every implicit convention becomes heavier debt. Every porous boundary, every missing contract, every poorly modeled state reduces the ability of a human or an agent to intervene safely.

There is another important shift as well. For a long time, some forms of modeling seemed too costly to write, maintain, or even justify. We settled instead for more implicit, lighter, sometimes blurrier forms. With the productivity gains brought by intelligences, that trade-off changes. Implementation cost drops in many cases. What becomes more costly instead is understanding, validation, testability, and control of drift. As a result, more explicit structures — which might have seemed too heavy yesterday — become good investments instead, because they make the system more readable, more controllable, and more transmissible.

Such code is easier to take over, audit, instrument, and transform, not because it is “prettier,” but because it exposes its logic and reduces the amount of guesswork.

And this is perhaps one of the great shifts of our time: blurry code is no longer merely difficult to maintain. It is also much harder to evolve properly with intelligences.

Conclusion

Code is not elegant prose. It is a system, to be judged by its robustness.

And robustness here is not a vague word. It refers to something precise: code that is intelligible, resilient, and observable.

To get there, we need to think less in terms of style and more in terms of structure: make the system more explicit, set boundaries, define contracts, validate at points of contact, protect invariants, make states explicit, clarify responsibilities and authorities, build forms of proof, and attach observability to the architecture itself.

If we still want to speak of elegance, then we need to shift our gaze. What deserves admiration is not code that impresses. It is code that forms a system.

Elegance is a vocabulary of the surface. Robustness is a vocabulary of the system.

Further reading

  • David Parnas — on modularity and boundaries
  • Bertrand Meyer — Design by Contract
  • David Harel — on statecharts
  • Eric Evans — Domain-Driven Design
  • John Hughes and Philip Wadler — on the functional tradition
  • Charity Majors, Liz Fong-Jones, George Miranda — on modern observability

A thought after reading?

If you would like to discuss this article, you can write to me here. I share because I care and I want to learn. Please teach me with care.