Is DevOps Dead, or Did It Ever Live?

Jon Brookes

2025-11-04

Photo by Pixabay: https://www.pexels.com/photo/yellow-dead-end-sign-during-day-time-163728/

In a previous article, google scale and over engineering, I identified factors contributing to increased technical debt in many areas of IT. This has, I believe, contributed to an ongoing debate around DevOps. Platform engineering is described by some as ‘DevOps done right’. Others question this, arguing that ‘DevOps is dead’.

A review of where and why DevOps came into being can give us an answer to the latter. I would argue that the DevOps in common use today is not what it was intended to be. It is therefore logical to conclude that DevOps, as currently implemented, is dead or dying.

DevOps timeline therapy

What went wrong? DevOps promised so much, so why has it failed in the eyes of some?

A timeline therapy analysis may shed some light.

I am by nature a hybrid, having trained as such with a guy called Alan Eardly at Staffs Uni under their School of Computing and ‘Technology Management’ - a new term for the 90s, heralded as a new way of working. It was predicted that the way we work in the next century would be different from the traditional, long-term, single-employer model. In fact, this trend had already begun, with people finding themselves changing career path not just once but multiple times in a working lifetime.

Hybrid working and hybrid management were already part of Japanese manufacturing processes and ideologies such as ‘just in time’, ‘Kanban’ and ‘continuous improvement’. These were most often applied to the manufacturing process, but managers in such companies were expected, and constantly trained, to work in multiple environments. A tour of duty could take managers through accounts, human resources, industrial design, manufacturing, systems architecture and so on - a process limited only by the size and diversity of the organisation.

The result? A multi-skilled workforce. This principle was applied to manual workers, not just managers. When production requirements changed or problems arose, workers literally dropped tools and moved to an entirely different workstation to assist where help was needed. A sort of hive mentality comes to mind.

At the same time, traditional software engineering and service management were stuck firmly in the past. The ‘waterfall’ model was widely accepted as the way to create software, deliver it into production and manage new software and features.

Code was often written in a procedural manner, and the new kid on the block was Object Oriented programming, where software re-use was seen as the way to break the chains of traditional software design by sharing code through object inheritance and class-based design principles.
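
As a minimal, illustrative sketch (the class and method names here are hypothetical, not drawn from any particular system), the promise was that common behaviour could live in a base class and specialised variants could inherit it rather than copy it:

```python
class Deployment:
    """Common release behaviour shared by every kind of deployment."""

    def __init__(self, name: str):
        self.name = name

    def strategy(self) -> str:
        return "basic"

    def deploy(self) -> None:
        # Reused as-is by every subclass; only strategy() varies.
        print(f"Deploying {self.name} using the {self.strategy()} strategy")


class BlueGreenDeployment(Deployment):
    """Inherits deploy() unchanged and overrides only the strategy."""

    def strategy(self) -> str:
        return "blue/green"


if __name__ == "__main__":
    BlueGreenDeployment("billing-service").deploy()
    # -> Deploying billing-service using the blue/green strategy
```

That, at least, was the theory: write the shared behaviour once, specialise by inheritance, and stop re-writing the same code in every project.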

The Agile Manifesto led to ‘agile development’ becoming a common way of working in technology and software development. Most job ads headlined ‘agile’ as a necessity for programmers and engineers looking for work.

Big tech started to release new features, software and services at a rapid rate. The old school of waterfall often took months, more often years, to complete software features and products. Now the likes of Google were iterating existing software in months and even days. The term ‘beta’ was applied to what was, in practice, perceived as stable; we still used ‘beta’ software from Google, confident that it would work for us, and we embraced rapid change.

This was good for Google, bad for others. Microsoft found themselves falling behind badly and needed a root-and-branch review of their entire philosophy. They embraced open source and declared their ‘love of Linux’. A surprise to many, more so in the open source community of the day.

Azure became the agent of change for Microsoft to regain lost ground where Amazon Web Services had taken the floor in virtual-machine-based hosting. Microsoft had to rid themselves of up to a third of their staff who could not make the change to open source and continuous development.

The earlier world I mentioned, that of Kanban and continuous improvement, did not stay in the manufacturing plants. It found its way into software delivery, reflected in continuous delivery and echoed in continuous development.

Older, blocking and delay-laden ways of working began giving way to new, continuous workflows built on short feedback loops.

But still there was a problem. Job roles in technology typically fell into one of two camps: developers (dev) and operations (ops). Two silos of knowledge and expertise, and often a cause of tension and division.

When Microsoft had their soul-searching exercise to embrace open source, they also took a look at DevOps and realised that Azure DevOps, their flagship product for it, was not being used in two thirds of their business. Each division, such as Office, MSSQL and so on, had its own release pipelines, and none of them used Azure DevOps. The only people using Azure DevOps were customers outside of Microsoft. So this too needed review and managed change, for Microsoft to adopt its own DevOps practices and start using its own products for them.

Microsoft's acquisition of GitHub, which had already started down a similar path with its own version of pipelines, called workflows, became a transition point for Microsoft and Azure DevOps users over time.

If there were a DevOps manifesto, similar to the Agile one, I believe that in order to heal the parting of the seas betwixt Dev and Ops, and to adopt DevOps pipelines, continuous development and continuous delivery for all, it would have said:

We all need to learn and adopt DevOps practice, learning to code where appropriate and necessary, whether we are developers or operational workers, and we adopt just-in-time and Kanban thinking to work as one. We move to where the work needs doing, we learn multiple skills. We become hybrid workers.

What happened in actuality, though, was this. Many companies took the answer to the divide between dev and ops to mean ‘we need to get a DevOps person’. A third silo was brought into existence. Job responsibilities fragmented further. Less information was shared between departments. Worse, employers sought to decrease the ‘salary burden’, just as is happening now with AI and agentic workflows.

In my earlier article I outlined the kind of work I have seen in the industry, where DevOps has run as its own operational unit in this new mode, and how this has led to over-engineering and technical debt.

Where are we now, and is DevOps dead?

We find ourselves in a world of software and service delivery that DevOps promised would be more unified between Dev and Ops, but isn't.

Rather than a utopian, happy balance, we have further fragmentation: two walls between three silos - dev, DevOps and ops. Natural attrition has led to developers jumping ship to higher-paying DevOps roles. Operations staff have atrophied, not training to become DevOps engineers or developers. Some companies have shed operations altogether; others have just renamed ops as DevOps.

The divide between dev and ops remains alive and well. DevOps folks have started to burn out, and some will leave the role to be replaced by developers with little or no knowledge of ops.

The ideal of hybrid workers and workflows is a far and distant dream.

DevOps, as I understood it, is not a career choice but a way of thinking, a philosophy of shared understanding and skills. Ops folks who work on call could easily swap into dev roles when needed, and so too could developers when on-call patterns require increased cover. Each pillar of knowledge would share alike with the other. Instead, the opposite has happened.

We need regulation in DevOps. Over-engineering and unmanaged, reactionary solutionising have led to overly complex infrastructure code bases that nobody understands, aside from the staff who have left to go to the next big challenge.

Prescriptive measures will be needed to fix the patient's broken DevOps limb.

Some won't like it, true, but you can't have a sprawling mess of complexity and technical debt and call it high velocity, expecting the same kind of CI/CD that Google can achieve, when you are a large or enterprise operation weighed down by now unmanageable technical debt.

A simpler approach is needed, not a more complicated one.

Making things simpler is not stupid. Some things can be stupid simple and still achieve something close to what a highly complex architecture and elaborate software design patterns would deliver, while remaining minimal and viable.

This is the thinking behind minimal viable Kubernetes, or MVK, as discussed in my earlier article.