AI Makes It Absurd to Keep Engineers Away from Reality

When the cost of producing code falls, value shifts toward understanding the real problem.

For a long time, software was limited by a very simple constraint: producing code was expensive.

It took a lot of time to turn an idea into an interface, an interface into business logic, business logic into a reliable system, and then that system into a product usable in production.

That constraint shaped companies.

Roles were separated.

Functions were specialized.

Filters multiplied between the customer and the person actually building the product.

The customer talks to someone.

That person reformulates.

A product manager prioritizes.

A designer gives it form.

An engineer implements.

Then the software reaches production.

That model made sense. When the cost of production is high, you need to protect production capacity. You need to avoid engineers being interrupted, scattered, pulled in every direction. Their time is scarce, so they are often placed at the end of the chain.

But AI changes that constraint.

It does not eliminate engineering work. It does not replace judgment. It does not make design easy. But it sharply reduces the marginal cost of producing code. It makes it possible to generate, modify, test, explore, and compare solutions much faster.

And when the cost of production falls, the central question changes.

The question is no longer only:

How do we produce more software?

It becomes:

How do we produce software that is more right?

The GitLab Signal

GitLab’s recent announcement is interesting because it makes this shift visible.

In its “GitLab Act 2” announcement, GitLab explains that it wants to restructure the company for the agentic era: a flatter organization, up to three layers of management removed in some functions, R&D reorganized into around sixty smaller teams with end-to-end ownership, and new operating principles centered on velocity with quality, ownership, and customer outcomes. GitLab also states that software will increasingly be built by machines directed by humans, but that humans keep the essential judgments: architecture, deep understanding of the customer problem, and taste-based tradeoffs.

That is exactly the important point.

The message is not only: “we are going to automate more.”

The deeper message is:

If machines can produce more, then humans must move closer to the places where their judgment really matters.

And one of those places is the understanding of the real problem.

The Underuse of Software Engineers

In many organizations, software engineers are still used as an implementation force.

They receive tickets.

They receive specifications.

They receive mockups.

They receive decisions that have already been made.

Then they translate all of that into code.

But this massively underuses their value.

A good software engineer does not merely “write code.” They understand systems. They see hidden dependencies, implicit states, fragile invariants, misplaced boundaries, abstractions that will break, and responsibilities that are poorly distributed.

They know that a product solution is not only a visible feature. It is the transformation of a real problem into an executable system.

And to perform that transformation well, they need to understand the problem.

Not only the ticket.

Not only the user story.

Not only the roadmap.

The problem.

The Doctor Without a Patient

There is a simple comparison.

Asking an engineer to build software that solves a customer problem without putting them in contact with that problem is like asking a doctor to treat a patient without ever seeing them.

We would find that absurd.

A doctor needs to listen to the patient.

They need to observe the symptoms.

They need to distinguish what is said, what is lived, what is measurable, and what is interpreted.

They need to ask questions.

They need to check whether the treatment actually has an effect.

Yet in software, we often accept the opposite.

We keep engineers away from reality.

We give them summaries.

We give them solutions that have already been formulated.

We ask them to treat a problem they have never observed.

That could be justified when engineering time was almost entirely absorbed by production. But if AI frees up part of that time, then it should be reallocated where it creates the most value.

And the obvious place is contact with reality.

The Engineer as Translator Between Reality and the Machine

There is an amusing paradox in today’s discourse about AI.

For a while, we heard a lot that software engineers would be replaced: since AI can write code, the reasoning went, the engineering role would become less important.

I think the opposite is probably going to happen.

AI does not remove the need for engineers. It increases the importance of engineers who can understand both what humans say and what the machine produces.

On one side, there are humans: their problems, constraints, habits, frustrations, contradictions, and sometimes poorly formulated requests.

On the other side, there is the machine: generated code, proposed architectures, agents capable of executing, systems that can produce very quickly but do not always know why they are producing.

Between the two, someone has to understand both languages.

That is where the best engineers become central.

They understand reality well enough not to build a solution disconnected from the ground.

They understand the system well enough not to accept naively what the machine generates.

They know how to transform a human problem into a technical system.

They know how to review, guide, correct, and constrain the output of AI.

They know how to distinguish code that works from a system that holds.

In this world, value is not only in the ability to write code. It is in the ability to connect:

human problem
  -> mental model
  -> architecture
  -> generated or written code
  -> production
  -> feedback from reality

That is probably why very good engineers will not become less important.

They will become more important.

Not all engineers. Not those who only execute tickets or mechanically produce code. But those who can understand, model, arbitrate, guide the machine, and bring the system back toward reality.

AI does not replace that skill.

It makes it more visible.

Why Developer Tools Have Innovated So Much

There is one domain where this short loop has always existed: tools for developers.

It is probably no accident that innovation has been so strong there.

Developers had the problems themselves. They directly felt the friction: a build that was too slow, a deployment that was too fragile, a painful API, a framework that was too heavy, an error that was hard to diagnose, an architecture that was drifting, a feedback loop that was too slow.

And above all, they had the ability to build the solutions.

The problem and the power to solve it were in the same place.

That is an extremely favorable situation for innovation. There is little translation, little distance, little loss of information. The person suffering from the problem can formulate a hypothesis, build a tool, use it immediately, observe what works, and then improve it.

felt friction
  -> tool built
  -> direct use
  -> immediate feedback
  -> fast iteration

That is a much shorter loop than in most other software industries.

In many B2B or domain-specific products, the engineer is far from the problem. They receive a translated version of reality: a ticket, a spec, a prioritized request, a mockup. The problem has already been interpreted several times before it reaches them.

With AI, that distance becomes less justifiable.

If engineers can produce faster, then they can spend more time where innovation truly begins: in direct contact with problems. They can observe users, understand situations, spot frictions, and then quickly turn that understanding into a system.

In other words, part of domain software could be rewritten with a loop much closer to the one that made developer tools so fertile.

Not because every domain will become simple.

Not because intermediaries will disappear.

But because the distance between those who live the problem and those who build the solution can shrink.

And that reduction of distance is probably one of the major organizational effects of AI.

The future of software will not only be more productive. It could be closer to reality.

The Real Full Stack

For a long time, we used the expression “full stack” to describe someone able to work across the frontend, backend, database, and sometimes infrastructure.

But that definition is becoming too narrow.

In the age of AI, the real full stack is not only technical.

The real full stack goes from the customer to production.

customer
  -> problem
  -> understanding
  -> design
  -> system
  -> code
  -> production
  -> observation
  -> iteration

That is the complete loop.

A truly useful engineer does not merely own several technical layers. They own part of that loop. They understand why the system exists, what it must transform, how it fails, how it behaves in production, and how users actually react.

This is not only a product engineer in the classic sense.

It is an engineer from problem to system.

Or, to keep a more natural expression:

a full-cycle product engineer.

But with a stronger definition:

A full-cycle product engineer does not merely own part of the technical stack. They own a complete loop: customer, problem, solution, system, production, observation, iteration.

The Product Manager Role Does Not Disappear

This does not mean that product managers, designers, customer success teams, or sales become useless.

That would be the wrong conclusion.

The point is not to replace a chain of specialists with engineers doing everything. The point is to reduce unnecessary filters between the people who understand reality and the people who build the system.

In some contexts, the product manager remains essential. They structure strategy, arbitrate priorities, synthesize multiple signals, understand the market, and work on positioning, distribution, and business constraints.

The designer remains essential. They work on perception, usage, attention, legibility, and the concrete experience.

Customer success remains essential. It observes real usage, blockers, frustrations, and adoption conditions.

But the engineer should not be only the final technical link in that chain.

They must participate earlier.

They must hear some problems directly.

They must observe the consequences of what they build.

They must be able to connect a technical decision to a customer effect.

Otherwise, we lose a huge part of their intelligence.

What AI Makes Possible

AI reduces part of the time spent on mechanical production.

Writing boilerplate.

Creating a first version.

Generating variations.

Refactoring a structure.

Producing tests.

Exploring multiple options.

Documenting.

Auditing.

Comparing.

None of this becomes free. But it becomes faster.

The wrong reaction would be to use all that gain to produce even more features, even faster, with even less understanding.

That would simply accelerate the wrong loop.

The right reaction is different:

use the time freed up to understand better, design better, and verify better.

That means more time with users.

More time observing real workflows.

More time understanding exceptions.

More time clarifying business invariants.

More time instrumenting production.

More time closing the loop between problem, solution, and outcome.

AI should not only increase the throughput of code.

It should increase the quality of the learning loop.

Software Is a System, Not an Answer to a Ticket

Robust software is not an accumulation of features. It is a system.

It has boundaries.

It has contracts.

It validates what comes from the outside.

It protects invariants.

It makes its states explicit.

It clarifies responsibilities.

It transforms data deterministically.

It provides proof.

It becomes observable.
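
These properties can be made concrete in a few lines. A minimal sketch, assuming a hypothetical subscription domain (every name below is invented for the illustration, not taken from any real product):

```python
from dataclasses import dataclass
from enum import Enum

# Explicit states: each important moment in the user lifecycle is named,
# rather than being encoded as a tangle of booleans.
class SubscriptionState(Enum):
    TRIAL = "trial"
    ACTIVE = "active"
    CANCELED = "canceled"

@dataclass(frozen=True)
class Subscription:
    state: SubscriptionState
    seats: int

    def __post_init__(self):
        # Boundary: validate what comes from the outside and protect the
        # business invariant "a subscription always has at least one seat".
        if self.seats < 1:
            raise ValueError("a subscription needs at least one seat")

def activate(sub: Subscription) -> Subscription:
    # Contract: only a trial can be activated; any other transition
    # is rejected instead of silently producing an inconsistent state.
    if sub.state is not SubscriptionState.TRIAL:
        raise ValueError(f"cannot activate from {sub.state.value}")
    return Subscription(state=SubscriptionState.ACTIVE, seats=sub.seats)
```

The point is not the code itself but the correspondence: the invariant, the explicit states, and the transition contract each mirror a rule of the real problem, which is exactly why they cannot be designed well at a distance from it.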

But these elements cannot be designed well if the real problem is poorly understood.

A technical boundary often corresponds to a business boundary.

A software contract often corresponds to a promise between two parts of the system.

An invariant often corresponds to a rule of reality that must never be violated.

An explicit state often corresponds to an important moment in the user lifecycle.

Useful observability often corresponds to a business question we will want to ask later.

If the engineer never sees reality, they cannot model these things well.

They can produce clean code.

They can follow patterns.

They can respect an architecture.

But they risk building a technically coherent system around a poor understanding of the problem.

Small Teams Have an Advantage

This thesis is especially important for small organizations.

In a large company, there are often organizational reasons to specialize roles heavily. There is political, commercial, contractual, and regulatory complexity. Not everything can be flat. Not everything can be direct.

But in a startup, a small product team, or a software studio, keeping engineers away from the customer is often a pure loss.

A small team can work differently.

It can have engineers who talk to users.

It can build quick prototypes.

It can ship early.

It can directly observe usage.

It can correct things without waiting for three validation cycles.

It can keep a short loop between problem and system.

That is probably one of the major advantages small organizations have in the age of AI.

They do not only have less organizational debt.

They can also build more direct loops.

And in a world where producing code becomes easier, the quality of the loop becomes a strategic advantage.

The Risk: Producing More Noise

There is, however, an obvious risk.

If AI makes it possible to produce faster, many companies will simply produce more.

More features.

More screens.

More automations.

More dashboards.

More workflows.

More complexity.

But producing more does not necessarily create more value.

It is entirely possible to use AI to accelerate the production of software that is useless, poorly targeted, poorly integrated, poorly observed, or that simply shifts complexity onto the user.

That is probably one of the great dangers of the current period.

When the cost of production falls, discipline must increase.

Otherwise, we create software inflation: more code, more surface area, more maintenance, more confusion, but not necessarily more real problem solving.

The central question must therefore remain:

What real problem does this system solve, for whom, in what situation, with what observable effect?

A New Allocation of Engineering Time

The time gained through AI should not automatically be reinjected into more production.

It should be reallocated.

Less time on mechanics.

More time on reality.

Less time executing tickets.

More time understanding situations.

Less time producing screens.

More time verifying that those screens change something.

Less time maintaining accidental abstractions.

More time designing systems that correspond to the problem.

This shift is not only technical. It is organizational.

It requires trusting engineers.

It requires exposing them more to users.

It requires reducing handoffs.

It requires measuring customer outcomes rather than internal activity.

It requires treating engineering as a function of understanding, not only as a function of production.

The Thesis

The thesis is simple:

When AI reduces the cost of producing code, keeping engineers away from reality becomes absurd.

Not because engineers should do everything.

But because they are among the best placed to transform a real problem into a robust system.

If we keep them away from the customer, we deprive them of the raw material of their judgment.

If we reduce them to implementation, we waste their ability to model.

If we give them only tickets, we prevent them from building the right systems.

AI does not make engineers less important.

It makes more visible what has always been their real value: understanding a problem, designing a system, making it exist in production, observing what it produces, and then improving it.

The interesting future of software is therefore not only one where engineers write faster.

It is one where they are finally placed in the right place:

close to the problem,

close to the customer,

close to the system,

close to reality.

A thought after reading?

If you would like to discuss this article, you can write to me here. I share because I care and I want to learn. Please teach me with care.