Software Delivery for a Changing System
Who this is for
This post is for software engineers and technical leads who are building or maintaining a software delivery pipeline. It assumes familiarity with Git, containers, and automated builds. It does not assume prior knowledge of GitOps, release manifests, or any specific tooling.
If you are already running a mature pipeline with trunk-based development and GitOps, the early sections will be familiar. The later sections on manifests as a master SBOM and the separation of packages from service images may still be worth reading.
Three terms that often get used as one
CI/CD is shorthand for three distinct practices that each have their own purpose, their own outputs, and their own boundaries. Understanding where one ends and the next begins is the foundation for everything that follows.
Continuous Integration is the practice of merging code into a shared branch frequently and validating each merge automatically. The goal is to catch problems early, while the change is still small. A CI pipeline runs linting, builds, tests, and security checks. It produces a pass or fail verdict. That is its entire job.
Continuous Delivery is the practice of keeping the codebase in a state where a release is always ready. This covers versioning, packaging, and publishing artefacts to registries. Delivery stops short of deploying to production; it prepares everything so that deployment can happen at any point.
Continuous Deployment takes the final step: applying a release to a target environment, automatically or with a human approval gate. Most teams use a combination: fully automated deployment to development and staging, with an approval step before production.
Each stage has a clear input, a clear output, and a clear boundary. A failure in one stage halts the pipeline at that boundary, so broken output never reaches the next stage.
The foundation: trunk-based development
The starting point for the pipeline is a trunk-based Git workflow. All active development flows into a single main branch. Feature branches are short-lived. There are no long-running release branches to reconcile.
This shapes the pipeline in useful ways. Everything that enters the trunk is expected to be releasable, which means CI needs to be thorough enough to give genuine confidence, and fast enough that developers are not waiting.
Continuous Integration: validate, nothing else
CI has one job: establish that the code on the trunk is correct and safe to release.
In practice that means:
- Linting: code style and static analysis
- Build: confirm the code compiles or bundles cleanly
- Tests: unit, integration, and contract tests as appropriate
- Security audit: dependency vulnerability scanning, SAST, container image scanning, and infrastructure-as-code policy checks (Checkov, tfsec, or equivalent)
CI does not assign versions. CI does not publish artefacts. It passes or fails. A passing result means the code is a candidate for a release.
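This contract can be sketched as a sequence of gates that produces nothing but a verdict. The check names below are illustrative, not tied to any particular CI tool:

```typescript
// A minimal sketch of CI as a pure verdict: run each check in order,
// stop at the first failure, publish nothing.
type Check = { name: string; run: () => boolean };

function runCi(checks: Check[]): { passed: boolean; failedAt?: string } {
  for (const check of checks) {
    if (!check.run()) {
      return { passed: false, failedAt: check.name };
    }
  }
  return { passed: true };
}

// Example: a pipeline where the test stage fails.
const verdict = runCi([
  { name: "lint", run: () => true },
  { name: "build", run: () => true },
  { name: "test", run: () => false }, // simulate a failing test suite
  { name: "security-audit", run: () => true },
]);
// verdict: { passed: false, failedAt: "test" } — no version, no artefacts
```

Note that nothing in this stage knows about versions or registries; a failed check simply stops the pipeline.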
Versioning: derived from commit intent
Once code passes CI, we generate a version. The most practical approach, especially in a monorepo, is Conventional Commits: a commit message convention that encodes whether a change is a fix, a feature, or a breaking change.
From those signals, a tool like Release Please generates a semantic version automatically:
- A `fix:` commit produces a patch bump (1.0.0 to 1.0.1)
- A `feat:` commit produces a minor bump (1.0.0 to 1.1.0)
- A `BREAKING CHANGE:` footer produces a major bump (1.0.0 to 2.0.0)
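The bump rules can be sketched in a few lines. This is a simplification of what a tool like Release Please actually does — real tools also parse scopes, per-package paths in a monorepo, and changelog sections:

```typescript
// Derive a semantic version bump from Conventional Commits messages.
// Simplified sketch: checks for breaking changes first, then features,
// then fixes; anything else produces no bump.
type Bump = "major" | "minor" | "patch" | "none";

function bumpFor(commits: string[]): Bump {
  if (commits.some((c) => c.includes("BREAKING CHANGE:") || /^\w+(\(.+\))?!:/.test(c))) {
    return "major";
  }
  if (commits.some((c) => c.startsWith("feat"))) return "minor";
  if (commits.some((c) => c.startsWith("fix"))) return "patch";
  return "none";
}

function nextVersion(current: string, commits: string[]): string {
  const [major, minor, patch] = current.split(".").map(Number);
  switch (bumpFor(commits)) {
    case "major": return `${major + 1}.0.0`;
    case "minor": return `${major}.${minor + 1}.0`;
    case "patch": return `${major}.${minor}.${patch + 1}`;
    default: return current;
  }
}

console.log(nextVersion("1.0.0", ["fix: handle empty payload"])); // "1.0.1"
console.log(nextVersion("1.0.0", ["feat: add retry policy"]));    // "1.1.0"
```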
This works well for monorepos because each package or service can be versioned independently. A change in one component does not force a version bump in an unrelated one.
Versioning propagates through the stack. When a library package gets a new version, any container image that depends on it should also receive a version bump, because its dependency graph has changed. The same applies to infrastructure configuration: a configuration change is a change to the release, and the version should reflect it.
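The propagation rule amounts to a reachability walk over the dependency graph: everything that transitively depends on a changed component is itself changed. The component names here are invented for illustration:

```typescript
// Given a changed component and a map of component -> direct dependencies,
// find every component whose dependency graph has changed and therefore
// needs its own version bump.
function affectedBy(changed: string, dependsOn: Record<string, string[]>): Set<string> {
  const affected = new Set([changed]);
  let grew = true;
  while (grew) {
    grew = false;
    for (const [component, deps] of Object.entries(dependsOn)) {
      if (!affected.has(component) && deps.some((d) => affected.has(d))) {
        affected.add(component);
        grew = true;
      }
    }
  }
  return affected;
}

// orders-lib is a library package; orders-api and billing-worker are
// service images that depend on it.
const graph = {
  "orders-api": ["orders-lib"],
  "billing-worker": ["orders-lib", "billing-lib"],
  "frontend": ["frontend-lib"],
};

// A fix in orders-lib bumps both dependent service images; frontend,
// which never touches it, keeps its version.
console.log([...affectedBy("orders-lib", graph)]);
```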
Continuous Delivery: publish artefacts
Once a version is assigned, the delivery pipeline publishes the artefacts:
- Packages (npm, Maven, pip, etc.) are pushed to a registry: a private Verdaccio, Artifactory, GitHub Packages, or similar
- Container images are built, scanned, and pushed to a container registry (ECR, GCR, ACR) with the semantic version tag
- Infrastructure templates (CDK, Terraform modules, Bicep components) are versioned and stored accordingly
At this point, every artefact has a known, immutable version. Nothing downstream needs to rebuild or repackage them.
The release manifest: a master SBOM for the whole system
A release manifest is a single document that describes everything required to run a complete version of your system:
- Which packages are included, at which versions (npm, Maven, pip, etc.)
- Which container images are referenced, with their exact digest or tag
- Which infrastructure components are declared, whether that is CDK constructs, Terraform modules, Pulumi stacks, Bicep templates, Kustomize overlays, or ArgoCD Application definitions
This makes the manifest a Software Bill of Materials (SBOM) for your entire release: not just the application code, but the full stack.
Manifests can be named in whatever way suits your release process: semantic versions, calendar dates, sequential numbers, or meaningful names. The naming convention matters less than the consistency. Every release has one manifest, and every manifest pins every component.
The manifest is configuration as code. It lives in your Git repository, is reviewed like any other change, and becomes the authoritative record of what was deployed and when.
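As a sketch, a manifest might look like the typed document below. The field names, versions, and validation rule are illustrative, not a real schema; the point is that one reviewed document pins every component, and anything unpinned is rejected:

```typescript
// Hypothetical release manifest as configuration as code.
interface ReleaseManifest {
  release: string;
  packages: Record<string, string>;       // package name -> pinned semver
  images: Record<string, string>;         // service name -> image reference
  infrastructure: Record<string, string>; // module name -> pinned version
}

const manifest: ReleaseManifest = {
  release: "2024.06.1",
  packages: { "orders-lib": "2.3.1", "billing-lib": "1.4.0" },
  images: { "orders-api": "registry.example.com/orders-api:2.3.1" },
  infrastructure: { "network-module": "1.2.0" },
};

// A manifest is only an SBOM if everything is pinned: reject floating
// references such as ":latest" or tagless images.
function unpinnedImages(m: ReleaseManifest): string[] {
  return Object.entries(m.images)
    .filter(([, ref]) => ref.endsWith(":latest") || !ref.includes(":"))
    .map(([name]) => name);
}

console.log(unpinnedImages(manifest)); // [] — every image is pinned
```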
Continuous Deployment: apply the manifest
Deployment has one job: apply what the manifest says to the target environment. It does not assign versions. It does not build artefacts. It reads the manifest and reconciles the environment to match it.
With a GitOps operator such as ArgoCD or Flux, this happens automatically: the operator watches the manifest in Git and keeps the cluster state in sync. Environment-specific differences such as replica counts, database connection strings, and resource limits are applied through overlays (Kustomize patches, CDK environment contexts, Terraform workspaces) on top of the base manifest. The manifest itself does not change between environments.
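The overlay mechanics reduce to a simple rule: the base carries defaults, the overlay carries only what differs per environment, and the overlay wins on conflict. A minimal sketch with invented settings:

```typescript
// Environment overlays as a shallow merge: the base never changes per
// environment; each overlay holds only the environment-specific values.
type Settings = { replicas: number; dbHost: string; cpuLimit: string };

const base: Settings = { replicas: 1, dbHost: "localhost", cpuLimit: "500m" };

function applyOverlay(b: Settings, overlay: Partial<Settings>): Settings {
  return { ...b, ...overlay }; // overlay wins, base fills the rest
}

const production = applyOverlay(base, { replicas: 6, dbHost: "db.prod.internal" });
console.log(production); // replicas 6, prod db host, cpuLimit inherited from base
```

Real tools such as Kustomize do strategic merges over structured YAML rather than a shallow object spread, but the separation of base and overlay is the same idea.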
Rollback is git revert. Audit is git log. If you need to know exactly what was running in production on a given date, the answer is in the manifest history.
Packages versus service images
One structural decision worth making early is to keep packages and service images as separate concerns.
A package is a library: a collection of functions, types, and logic published to a registry. It does not run on its own.
A service image is a container that references one or more packages and exposes a subset of their functionality as a running process: an API, a worker, a scheduled job.
In practice this means you publish a library independently before building the service image that uses it, unless the library is purely internal and scoped to a single service. If there is any chance a library will be referenced by more than one service, it belongs in its own package with its own version.
This gives you a useful path forward. You can start with a single container that bundles everything and break out functionality into separate services as scaling requirements emerge. Each service references the same versioned packages. The shared logic does not change; only the service boundary changes.
Starting with the monolith is entirely reasonable. The packages being independently versioned is what keeps the decomposition path open later, without needing to rewrite the underlying logic.
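The split can be made concrete with a toy example. The package and service names are invented; the point is that the logic lives once, in a versioned library, and each service is only a thin process boundary around it:

```typescript
// ---- published package (e.g. @acme/pricing, versioned independently) ----
// The shared business logic: it does not run on its own.
function priceWithTax(net: number, taxRate: number): number {
  return Math.round(net * (1 + taxRate) * 100) / 100;
}

// ---- service image A: a checkout API exposing the logic as a request handler ----
function handleCheckout(net: number): { total: number } {
  return { total: priceWithTax(net, 0.2) };
}

// ---- service image B: a nightly invoicing worker reusing the same package ----
function invoiceLine(net: number): string {
  return `Total incl. tax: ${priceWithTax(net, 0.2)}`;
}

console.log(handleCheckout(100)); // { total: 120 }
console.log(invoiceLine(100));    // "Total incl. tax: 120"
```

When the monolith is later decomposed, both services keep importing the same pinned package version; only the process boundary moves.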
What developers own, and what they do not
Once the pipeline and package structure are in place, the separation of concerns for developers becomes clear.
A developer’s primary responsibility is producing working code. In practice, that means writing code that is correct, tested, and ready to be consumed by a service. It does not mean owning the infrastructure that runs it, the scaling policy, or the compute type.
Test-driven development fits naturally here. Unit tests validate individual functions and modules in isolation. System integration tests (SIT) validate how components interact, either by mocking external endpoints entirely or by introducing a test endpoint that simulates a specific part of the wider system landscape. Both approaches keep the developer focused on the code rather than on the full environment.
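The SIT approach of substituting an endpoint can be sketched as dependency injection: the code under test takes the endpoint as a parameter, and the test passes a double that simulates the wider landscape. All names here are illustrative:

```typescript
// The external dependency, expressed as a function type rather than a URL.
type FetchOrder = (id: string) => Promise<{ id: string; total: number }>;

// Code under test: it depends on an injected endpoint, so it can be
// exercised without the real order service existing.
async function orderSummary(id: string, fetchOrder: FetchOrder): Promise<string> {
  const order = await fetchOrder(id);
  return `Order ${order.id}: ${order.total.toFixed(2)}`;
}

// Test double: a fake endpoint standing in for the order service.
const fakeOrderService: FetchOrder = async (id) => ({ id, total: 42.5 });

orderSummary("A-1", fakeOrderService).then((s) => {
  console.log(s); // "Order A-1: 42.50"
});
```

The developer validates the interaction without ever provisioning the real environment, which is exactly the separation the section describes.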
A service owner is a separate concern. A service owner decides which libraries to use, how to chain them, and how to expose their functionality. They are responsible for the service’s security posture: in a zero-trust architecture, perimeter security is assumed to be insufficient on its own. We assume breach. Each service authenticates and authorises every request, regardless of where it originates. The service is the security boundary, not the network.
Service owners are also responsible for the decomposition path. Following the MonolithFirst pattern, you start with a monolith and break out services when there is a clear reason to do so: a scaling bottleneck, an independent release cadence, a team ownership boundary. The monolith is not a temporary embarrassment; it is the correct starting point.
This clean separation of developer and service concerns has a practical consequence for infrastructure. You do not need to handle the difference between a long-running backend process and a short-lived event-driven compute task in application code. That is a deployment decision. The service image defines what runs; the deployment manifest and its overlays define how and where it runs. The developer does not need to think about it.
The role of domain-driven design
This architecture starts to break down if libraries are designed around technology rather than around the business domain they represent.
A library named after a technical concept, such as interfaces.ts or adapters.ts, will inevitably accumulate code from multiple unrelated areas of the system. Over time, it becomes difficult to understand what the library is actually for. Refactoring becomes risky because a change in one part of the file can affect something completely unrelated. Deprecating a discrete capability, changing a workflow engine, or splitting the library for a new service boundary all require untangling code that was never meant to belong together.
Domain-driven design addresses this by naming and structuring code around the business problem it solves. A library that reflects a bounded domain communicates its intent clearly. When the business needs change, or when a part of the system needs to be extracted into its own service, the boundaries are already visible in the code. You are not refactoring a pile of mixed concerns; you are lifting out a coherent domain.
The practical test is whether a developer can read a library name and immediately understand what business area it belongs to. If the answer is no, the library is likely a candidate for restructuring before it grows further.
Summary
| Stage | Responsibility | Not responsible for |
|---|---|---|
| Developer | Working code, unit tests, SIT | Infrastructure, scaling, compute type |
| Service owner | Library selection, zero-trust security, decomposition path | How or where the service runs |
| Continuous Integration | Lint, build, test, security audit | Versions, artefact publishing |
| Versioning | Semantic version from commit history | Building artefacts |
| Continuous Delivery | Publish packages, containers, IaC | Deployment decisions |
| Release manifest | SBOM for the release, config as code | Environment-specific config |
| Continuous Deployment | Apply manifest to target environment | Rebuilding or repackaging |
Each concern has a clear owner. The developer does not think about deployment. The pipeline does not think about domain design. The manifest does not carry environment configuration. When each layer stays within its boundary, the whole system becomes easier to reason about, to scale, and to change.
Tobias Lekman is a cloud and systems architect working across regulated industries. He consults through Lekman Consulting.