In software development companies, the rise of coding agents has increasingly raised the question of how to deal with generated code:
- More code is being generated, and there is often a lack of time for reviews – how can this be solved?
- Does generated code need to be reviewed at all?
- Who is actually responsible for the changes in the code?
I see this quite pragmatically: The fundamentals of good software development do not change just because the tools become more powerful.
Who bears the responsibility?
The author of the code remains the person who "operates" the coding agent. He or she is fully responsible for the result. The fact that an AI types the lines does not absolve the developer of the duty to understand the code and to vouch for its correctness.
Coding agents are tools – very powerful ones, admittedly, but in the end, what counts is the result that we as professionals commit to the codebase.
Quantity over Quality?
AI often produces more code in less time. Here we have to ask critically: Is this code really necessary, or is it superfluous?
Is real value being created, or was the fifth redundant test case just generated? I have seen test cases implementing Assert.True(true) just so there are "more tests". Such code must be ruthlessly removed. Quantity is not a quality attribute, and "Vibe Coding" must not lead us to flood our repositories with boilerplate and useless code.
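To make the point concrete, here is a minimal sketch of what such filler looks like next to a test that earns its place. I am assuming xUnit/C# here, matching the Assert.True(true) reference above; OrderService and its API are purely illustrative:

```csharp
using System.Linq;
using Xunit;

// Tiny illustrative class under test (hypothetical, for demonstration only).
public class OrderService
{
    public decimal Total(decimal[] lineItems) => lineItems.Sum();
}

public class OrderServiceTests
{
    // Generated filler: always passes, verifies nothing, and only inflates
    // the test count. Exactly the kind of code that should be removed.
    [Fact]
    public void Order_Is_Valid()
    {
        Assert.True(true);
    }

    // A test that earns its place: it exercises real behavior and fails
    // when that behavior breaks.
    [Fact]
    public void Total_Includes_All_Line_Items()
    {
        var service = new OrderService();

        Assert.Equal(15.0m, service.Total(new[] { 10.0m, 5.0m }));
    }
}
```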
Code Reviews as Knowledge Transfer
Code reviewing, in particular, is an essential vehicle for knowledge transfer within the team. The review must be given at least as much attention as the development itself.
I am aware that the reality of everyday project life often looks different and that reviews are perceived as tedious. But here, too, generative AI can help to structure and supplement the review. A prompt like "Take a look at such-and-such aspect of pull request #123" lets the AI perform a preliminary analysis. That gives me a starting point for reviewing even a large PR efficiently without getting lost in details. The AI does not replace the review; it assists with it.
Clarity from Leadership
Scrum Masters, project managers, architects, and management have the task of bringing clarity to statements like "We need to use more AI" and "AI can help us".
It must be clear what is meant by this: not replacing code reviews, junior developers, and everything else that, from a controlling perspective, supposedly only costs money. What must be meant is supporting both development AND code reviews for every developer. AI should enable us to work better, not rationalize away quality assurance.
The Duty of Developers
It is our duty as developers to take responsibility for our own code changes – whether hand-written or generated. We must actively demand code reviews and give each other enough time for them to take place. Only in this way can we be fully effective as a team and prevent technical debt from exploding due to unchecked AI-generated code.
TL;DR
Coding agents increase the pace and shift the focus even further towards producing code. It is all the more up to developers to correct this bias and make sufficient room for code reviews. Management, Scrum Masters, and architects must grant this space and provide clarity on the use of coding agents.
Need Support?
Do you want to introduce coding agents in your company but are unsure how to structure processes and reviews? We are happy to help! Just reach out via our contact page, and together we will figure out how to make your team fit for the future.
