AI in development is no longer an add-on — it is becoming a base layer
For a few years, AI in development meant one thing: code completion. Ready-made patterns. A minute saved here, a useful hint there.
That phase is over.
In 2026, the shift is not that AI writes code faster. It is that AI no longer stops at writing code. It now works through the full path of a change: collecting context, proposing a plan, editing code, reviewing proposed changes, flagging security and quality risks, suggesting fixes, and handing the work into whatever automated process prepares it for release.
Once AI touches multiple stages of a change, it stops being a convenient extra. It becomes part of how the team actually works.
GitHub made this visible in late 2025, describing AI, agents, and typed languages as forces "driving the biggest shifts in software development in more than a decade."
What matters there is not the phrasing. It is what the phrasing reflects: AI is no longer treated as a side tool. It is increasingly treated as part of the engineering foundation.
Around the same time, GitHub's Copilot code review system changed. It moved away from a model where AI only leaves comments, toward something that combines LLM analysis, tool calls, and rule-based checks that produce consistent results. GitHub describes it as combining "detections from large language models, calls to external tools, and consistent rule-based checks through tools such as ESLint and CodeQL."
AI is not just suggesting text anymore. It is working alongside the tools teams already use to inspect code.
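One way to picture that mixed model is a merge step that treats deterministic checks as ground truth and flags AI findings nothing corroborates. This is a minimal sketch under assumed names (`Finding`, `merge_findings`); it is not GitHub's actual API:

```python
# Illustrative sketch: combining AI review findings with rule-based checks.
# All names here are hypothetical, not a real code review API.
from dataclasses import dataclass

@dataclass
class Finding:
    file: str
    line: int
    message: str
    source: str  # "llm" or "linter"

def merge_findings(llm_findings: list[Finding],
                   linter_findings: list[Finding]) -> list[Finding]:
    """Keep every rule-based finding as-is. Keep an LLM finding
    unchanged only when a deterministic check flags the same
    location; otherwise mark it unconfirmed for human triage."""
    confirmed = {(f.file, f.line) for f in linter_findings}
    merged = list(linter_findings)
    for f in llm_findings:
        if (f.file, f.line) in confirmed:
            merged.append(f)
        else:
            merged.append(Finding(f.file, f.line,
                                  f.message + " (unconfirmed)", f.source))
    return merged
```

The design choice is the point: the model proposes, the rule-based layer corroborates, and anything unconfirmed is routed to a person rather than silently accepted.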
The same shift shows up in GitHub's Copilot cloud agent, which can inspect a repository, build an implementation plan, make changes in a separate branch, and prepare them for review. GitHub calls it "an autonomous and asynchronous software development agent."
So AI is no longer waiting in the editor for the next prompt. It can take a scoped task, work through several steps, and return the result into the team's normal review flow. That is what "agentic workflows" actually means — not the label, but the shift: from reacting to a single instruction, to participating in several connected ones.
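The scoped-task loop can be sketched as a toy function. Everything here is a stand-in (plain strings and dicts instead of a real repository, branch, or agent API), purely to show the shape of "several connected steps":

```python
# Toy sketch of a scoped agent task: collect context, plan, edit a
# working copy, hand the result back for review. All structures are
# stand-ins, not a real agent or GitHub API.
def run_scoped_task(task: str, files: dict[str, str]) -> dict:
    """Return a review-ready payload: a branch name, an edit plan,
    and the edited working copy."""
    # 1. Collect context: find the files relevant to the task.
    context = {name: text for name, text in files.items() if task in text}
    # 2. Propose a plan: here, one edit per relevant file.
    plan = [f"update {name}" for name in context]
    # 3. Apply edits on a working copy, standing in for a branch.
    working_copy = dict(files)
    for name in context:
        working_copy[name] = context[name] + "\n# edited"
    # 4. Hand back into the team's normal review flow.
    return {"branch": f"agent/{task}", "plan": plan, "changes": working_copy}
```

The shape, not the detail, is what matters: each step feeds the next, and the output is a reviewable artifact rather than an autocompletion.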
Once that happens, the questions teams ask have to change too.
"Should we use AI?" stopped being interesting. In 2026, even "which model do we prefer?" is already the wrong level. The more useful question is how well AI is woven into change review, testing, release prep, security, and quality control.
Because once AI works through the full path from draft to finished change, speed stops being what you optimize for. Control does.
Without control, AI accelerates noise.
A mediocre requirement becomes a neatly packaged change faster. A fragile architectural decision gets embedded earlier. A weak assumption survives longer because building a quick prototype to test an idea has become cheap. A review process that was already missing things misses them even faster.
This is why the strongest current research does not describe AI as something that automatically matures engineering. DORA's 2025 findings settled on one word: an "amplifier."
AI strengthens good systems and magnifies problems in weak ones. Teams with solid delivery discipline, real code review, and clear standards get more out of it. Teams with weak foundations do not get more maturity. They get faster instability.
The practical consequences are not abstract. They run through the actual tools and habits teams use every day.
Code review becomes a mixed system: AI surfaces issues, rule-based checks confirm or reject them, and people decide what is genuinely risky. The job shifts from reading everything to managing signal and escalation.

CI/CD can no longer just run a build. If AI can create changes, patch them, and hand them forward, the pipeline has to check intent, test behavior, and catch the kinds of mistakes a model can make while sounding completely confident.

Repository quality starts to matter as an input, not just a byproduct: the better the documentation, architectural boundaries, and tests inside a project, the less chaos AI introduces when it operates autonomously.
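One way such a pipeline might gate AI-authored changes is to select stricter checks by provenance. This is a sketch under stated assumptions: the `copilot/` branch convention and the check names are illustrative, not any CI product's API:

```python
# Illustrative sketch: a CI gate that applies stricter checks to
# AI-authored changes. Branch convention and check names are assumptions.
def gate_change(branch: str, files_changed: list[str]) -> list[str]:
    """Return the checks a change must pass before merge."""
    checks = ["unit-tests", "lint"]
    # Assumed convention: agent branches are prefixed "copilot/".
    ai_authored = branch.startswith("copilot/")
    if ai_authored:
        # A model can be confidently wrong, so behavior-level tests
        # and a security scan are mandatory rather than optional.
        checks += ["integration-tests", "security-scan"]
    if any(f.endswith((".yml", ".yaml")) for f in files_changed):
        checks.append("config-validation")
    return checks
```

The idea is that the pipeline encodes "who made this change and what could it break," instead of running the same build for every author.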
And what makes a strong developer is shifting. The best people stand out not only because they ship faster, but because they are better at knowing what to hand to AI, what to constrain, what to verify by hand, and where their own attention still has to stay sharp.
GitHub described the direction plainly: "Copilot used to be an autocomplete tool. Now, it is a full AI coding assistant that can run multi-step workflows."
Once AI participates in the system rather than assisting inside the editor, teams need a different level of discipline: clear rules in the repository, strong automated tests, a reliable review process before changes merge, security tools inside the delivery path, documented architectural limits, and explicit policies for what AI is allowed to do autonomously.
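An explicit autonomy policy can be as simple as a deny-by-default table. The action names and tiers below are assumptions for illustration, not a standard:

```python
# Minimal sketch of an explicit autonomy policy for an AI agent.
# Action names and tiers are illustrative assumptions.
POLICY = {
    "open_draft_pr":    "allowed",
    "edit_docs":        "allowed",
    "modify_tests":     "needs_review",
    "change_ci_config": "needs_review",
    "merge_to_main":    "forbidden",
    "rotate_secrets":   "forbidden",
}

def is_permitted(action: str, human_approved: bool = False) -> bool:
    """Deny by default: any action not listed is treated as forbidden."""
    tier = POLICY.get(action, "forbidden")
    if tier == "allowed":
        return True
    if tier == "needs_review":
        return human_approved
    return False
```

What matters is that the policy is written down and checked in code, so "what is AI allowed to do autonomously" is a reviewable artifact rather than tribal knowledge.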
None of this is about whether AI belongs in software development. That is already settled.
The question is whether teams treat it as a loose trick for speed, or as something that requires the same rigor as any other critical part of how changes are made and delivered. AI in development is no longer an add-on. It is becoming part of the foundation. The teams that do well will not be the ones that generate the most code — they will be the ones that build the strongest system around it.
Sources
- Octoverse: A new developer joins GitHub every second as AI leads TypeScript to #1 (GitHub, 2025)
- New public preview features in Copilot code review: AI reviews that see the full picture (GitHub, 2025)
- About GitHub Copilot cloud agent (GitHub Docs, 2026)
- GitHub Copilot tutorial: How to build, test, review, and ship code faster (GitHub, 2025)
- 2025 DORA State of AI-Assisted Software Development (Google Cloud / DORA)
- From adoption to impact: Putting the DORA AI Capabilities Model to work (Google Cloud, 2025)
- TDD and AI: Quality in the DORA report (Google Cloud)
Continue Reading
If this article was useful, there are more notes on architecture, AI workflows, delivery, and engineering practice in the journal.