My current client has an enterprise CI/CD pipeline that is maintained by a centralized team that does not use it. Its product owner(s) do not rely on its output for their revenue. They are not measured on its performance or reliability.
And yet we are obligated to use this pipeline. There’s no other option. We were also handed a microservices architecture from the Architecture team, so our application comprises about 20 separate services, each with its own deployment pipeline, thereby magnifying our sensitivity to the whims of the enterprise CI/CD pipeline.
The pipeline’s primary purpose is not to ship software - it’s to enforce each and every step in the organisation’s deployment process. It’s a semi-automated box-checker.
I say “semi-automated” because the pipeline requires constant encouragement and attention to deploy an artifact. For example, when deploying a binary to our dev environment, a person has to check back approximately every 10 minutes to click each of the four evenly-spaced confirmation dialogs. In some cases, different people have to click different confirmations, and the pipeline patiently waits for each of these clicks before proceeding. Even when all of the confirmations are affirmed immediately, it takes about an hour to deploy a pre-built binary to the dev env. The dev environment is the first of four environments on the path to production.
One hour to deploy one binary to one environment.
The cost of this cannot be overstated.
The first, and most obvious, cost is that we have hired four people as dedicated pipeline button-clickers. They literally spend their day clicking buttons to keep the pipelines moving. It’s a full-time job.

The code from Lost
The pipeline limits how frequently we can get new builds into the hands of our testers. Testers are often kept waiting. This is exacerbated when a bug fix doesn’t work: the tester waited an hour, spent a few minutes testing, and now has to wait another hour or two for the next patch.
Developers make questionable decisions because they’re acutely aware of how long it takes to release changes. They push more behaviour into the configs because configs can be changed without a deployment. They put more debug statements in the code because, y’know, “just in case” - they know that adding them later could take a day or more.
The elongated feedback cycle means developers lose context. By the time bug reports or tickets for missed requirements get filed, they’re already working on the next thing, so instead of being fixed straight away, issues go into the backlog. This makes testing cycles take longer than necessary.
Pull requests are bigger than they should be because developers feel pressure to ship complete features with each PR instead of small incremental changes. They push their code less frequently. This increases the risk of each change, increases the code review burden, and increases the costs sunk into potentially poor design choices.
Our testing team has to test every permutation, every edge case, every failure scenario, because our slow pipeline artificially inflates the cost of a production deployment. If a defect is found during post-deployment verification then the release will be rolled back, and we have to reschedule the weeks-long deployment. So we cannot have any defects. So testing cycles take forever. The fear runs deep.
And then there’s the human cost. Production deployments have to be done overnight because they take so long - the Business does not want to run single-site during peak business hours, and the pipeline can’t do rolling updates. Because we also need a sufficient rollback window, there’s only enough time in a deployment window to deploy one or two (of twenty) services in a night. So now we’re running overnight deployments most nights and burning our people out.
Your product manager’s writing cheques your pipeline can’t cash
Everyone feels the pressure. We’re being pushed to meet timelines that our pipeline cannot meet. But never underestimate a developer’s drive to automate.
So we built a shadow pipeline that can reliably deploy a build in 45 seconds. We had to do this to meet our timelines. It was a game-changer. We can now get builds to our testers quickly. We can iterate. We can make incremental changes.
We went out on a limb and put ourselves at risk by doing this. Even though we used existing capabilities (ssh) and approved tools (Ansible) to build it, we keep it on the down-low for fear of retribution. We should not have had to build this pipeline, and I resent that I had to make this decision.
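For the curious: conceptually, the shadow pipeline is nothing more exotic than an Ansible playbook pushing a pre-built artifact over ssh and bouncing the service. A minimal sketch of the idea (the host group, paths, and service name here are hypothetical, not our actual playbook):

```yaml
# Hypothetical sketch of a push-based deploy over ssh.
# Hosts, paths, and the service name are illustrative only.
- name: Deploy a pre-built artifact to dev
  hosts: dev_app_servers
  become: true
  vars:
    artifact: "builds/example-service-{{ build_id }}.jar"
    deploy_dir: /opt/example-service
  tasks:
    - name: Copy the artifact to the target host
      ansible.builtin.copy:
        src: "{{ artifact }}"
        dest: "{{ deploy_dir }}/app.jar"

    - name: Restart the service
      ansible.builtin.systemd:
        name: example-service
        state: restarted
```

A run like this takes as long as the file copy plus a service restart, which is how you get from an hour down to under a minute: there are no approval gates, no queues, and no dialogs waiting for a human to click.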
We cannot use our shadow pipeline for the pre-prod or production environments because it would violate dozens of corporate policies, but we’ve reached a tipping point in the program and we need to be able to ship faster. We can no longer afford to spend weeks drip-feeding our updates into production.
So I have filed paperwork requesting approval to use our shadow pipeline for the pre-prod and prod deployments, despite the risk that they decline the request AND ban us from using it entirely, even for the dev environments, which I’m certain would cause most of the team to quit.