How to Improve Value Streams: A Seven-Step Approach

Author
Romano Roth
I believe the next competitive edge isn’t AI itself, it’s the organisation around it. As Chief AI Officer at Zühlke, I work with C-level leaders to build enterprises that sense, decide, and adapt continuously. 20+ years turning this conviction into practice.

A value stream is the path that value takes from the first idea all the way into production. It is the sum of every step, handover, and wait in between. In this video, I walk through a simple seven-step approach for identifying a value stream, measuring how it really performs, designing a target state, and then improving it step by step. The numbers in the example are simplified on purpose, so the method shines through more clearly than any single result.

Step 1: Identify the Value Stream

The first move is always the same: pick a value stream and draw it. In my example I use a classic, simplified flow. It starts with an idea, for instance a new feature. From there, we move on to writing the specification, implementing the feature, manually testing it, and finally manually deploying it into production.

This is not a complicated diagram. Five boxes in a row are enough. What matters is that you make the steps explicit, because you cannot improve what you have not seen. Once the value stream is on paper, everyone looks at the same picture and suddenly the conversation changes from opinions to observations.

Step 2: Identify the People in the Stream

The next step is to identify who is actually working in each step of the value stream. In my example, the business has the idea and also writes the business specification. The developers implement the feature. A quality engineer manually tests it. And operations manually deploys it into production.

Doing this exercise sounds trivial, but it quickly exposes how many handovers there are. Every transition between these groups is a potential source of delay, rework, and miscommunication. When you can literally see which hands touch a feature between idea and production, you start to understand why things take so long, even though no single person seems to be slow.

Step 3: Measure What Really Happens

Now we measure. For every step I look at three numbers: process time, lead time, and percentage complete and accurate.

Process time is the time actual work is being done. Lead time is the time from when a task enters a step until it leaves it, including all the waiting in between. Percentage complete and accurate tells us how often the work going into the next step is actually good enough, or how often it gets rejected and has to come back.

In the example, the numbers look like this:

  • Idea: process time 8 hours, lead time 8 hours, percentage complete and accurate 75 percent.
  • Writing specification: process time 40 hours, lead time 80 hours, percentage complete and accurate 50 percent.
  • Implementation: process time 40 hours, lead time 80 hours, percentage complete and accurate 75 percent.
  • Manual testing: process time 16 hours, lead time 40 hours, percentage complete and accurate 50 percent.
  • Manual deployment: process time 1 hour, lead time 8 hours, percentage complete and accurate 80 percent.

These three numbers per step are enough to have a very honest conversation about the state of the system. They are also something you can actually collect, without having to buy expensive tooling.
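The three measurements per step fit into a very small data structure. The sketch below is plain Python, with the `Step` class purely as an illustration; it records the example's current-state numbers and surfaces the waiting time hidden in each step:

```python
from dataclasses import dataclass

@dataclass
class Step:
    name: str
    process_time_h: float  # hours of actual work
    lead_time_h: float     # hours from entering the step until leaving it
    pca: float             # percentage complete and accurate, as a fraction

# Current-state numbers from the example above.
current_stream = [
    Step("Idea", 8, 8, 0.75),
    Step("Writing specification", 40, 80, 0.50),
    Step("Implementation", 40, 80, 0.75),
    Step("Manual testing", 16, 40, 0.50),
    Step("Manual deployment", 1, 8, 0.80),
]

for step in current_stream:
    waiting = step.lead_time_h - step.process_time_h
    print(f"{step.name}: {waiting:g}h waiting, {step.pca:.0%} complete and accurate")
```

Even this tiny table makes the queues visible: the specification and implementation steps each carry 40 hours of pure waiting.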

Step 4: Analyze the Current State

With the measurements on the table, we sum things up. In this example, the total process time is 105 hours and the total lead time is 216 hours. The rolling percentage complete and accurate is only 11 percent, which means that only 11 out of 100 ideas make it cleanly through this pipeline without rework somewhere. The activity ratio, total process time divided by total lead time, is roughly 48 percent.

During the analysis I also look at where the bottlenecks are, where handovers cause long waiting times, and where the gap between process time and lead time is the biggest. That gap is where your ideas are sitting in queues, not being worked on. In most value streams, that is where the biggest opportunities for improvement hide.
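The aggregates follow directly from the per-step numbers: sum the process times, sum the lead times, multiply the per-step percentages for the rolling figure, and divide the totals for the activity ratio. A minimal sketch using the example's numbers:

```python
import math

# (process time h, lead time h, %C&A as a fraction) per step, current state.
steps = [
    (8, 8, 0.75),    # idea
    (40, 80, 0.50),  # writing specification
    (40, 80, 0.75),  # implementation
    (16, 40, 0.50),  # manual testing
    (1, 8, 0.80),    # manual deployment
]

total_process_time = sum(pt for pt, _, _ in steps)
total_lead_time = sum(lt for _, lt, _ in steps)
rolling_pca = math.prod(pca for _, _, pca in steps)  # chance of passing every step cleanly
activity_ratio = total_process_time / total_lead_time

print(f"total process time: {total_process_time}h")  # 105h
print(f"total lead time:    {total_lead_time}h")     # 216h
print(f"rolling %C&A:       {rolling_pca:.0%}")      # 11%
print(f"activity ratio:     {activity_ratio:.1%}")   # 48.6%
```

Note that the rolling percentage complete and accurate is a product, not an average: one weak step drags the whole chain down, which is why two 50-percent steps are enough to push the pipeline to 11 percent.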

Step 5: Design the Target Value Stream

Once we understand the current state, we design the future. The target value stream uses the same structure, but the steps and the numbers are different.

  • Idea by the business: process time 8 hours, lead time 8 hours, percentage complete and accurate 100 percent.
  • Instead of writing specifications, the team writes user stories: process time 8 hours, lead time 8 hours, percentage complete and accurate 100 percent.
  • Implementation, again by the team: process time 20 hours, lead time 40 hours, percentage complete and accurate 80 percent.
  • Continuous integration replaces manual testing. The build server builds and tests automatically: process time 0.1 hour, lead time 0.1 hour, percentage complete and accurate 100 percent.
  • Continuous deployment replaces manual deployment, going into UAT and then production automatically: process time 0.1 hour, lead time 0.1 hour, percentage complete and accurate 100 percent.

The implementation step looks less perfect at 80 percent complete and accurate, and that is by design. The continuous integration system is allowed to reject implementations when a test fails. That is exactly what automated testing is for. The rework happens in seconds, not in days.

Adding it all up: the target has a total process time of 36.2 hours, a total lead time of 56.2 hours, a rolling percentage complete and accurate of 80 percent, and an activity ratio of roughly 64 percent. In plain terms: 80 percent of ideas go straight to production, and your people spend a much larger share of their time on actual work instead of waiting.
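Running the same arithmetic over both streams makes the before-and-after comparison concrete. A sketch, with percentages as fractions and the numbers taken from the two examples above:

```python
import math

# (process time h, lead time h, %C&A as a fraction) per step.
current = [(8, 8, 0.75), (40, 80, 0.50), (40, 80, 0.75), (16, 40, 0.50), (1, 8, 0.80)]
target = [(8, 8, 1.0), (8, 8, 1.0), (20, 40, 0.80), (0.1, 0.1, 1.0), (0.1, 0.1, 1.0)]

def summarize(stream):
    """Return total process time, total lead time, rolling %C&A, activity ratio."""
    pt = sum(s[0] for s in stream)
    lt = sum(s[1] for s in stream)
    pca = math.prod(s[2] for s in stream)
    return pt, lt, pca, pt / lt

for label, stream in (("current", current), ("target", target)):
    pt, lt, pca, ar = summarize(stream)
    print(f"{label}: {pt:g}h process, {lt:g}h lead, {pca:.0%} rolling %C&A, {ar:.1%} activity ratio")
```

Side by side, the lead time drops from 216 to 56.2 hours and the rolling percentage complete and accurate jumps from 11 to 80 percent, almost entirely because the two worst steps were automated or replaced.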

Step 6: Define the Measures to Get There

The target is not the plan. The plan is the set of concrete steps you take to move from the current state to the target. I compare both value streams side by side and identify exactly what needs to change: what has to be automated, which handovers can be removed, which roles need to shift, which architectural changes are needed for continuous integration and continuous deployment to work.

Then I prioritize those measures and put them into a backlog. This is important: improving a value stream is not a big-bang project. It is a sequence of small, prioritized changes that you execute together with the people in the stream.
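One lightweight way to order such a backlog is to score each measure by expected benefit per unit of effort and sort on that ratio. The measures and numbers below are hypothetical, invented only to illustrate the mechanics, not taken from the analysis above:

```python
# Each measure: (name, estimated lead-time hours saved, effort in person-days).
# All entries are hypothetical examples for illustration.
measures = [
    ("Automate build and test (CI)", 39.9, 15),
    ("Automate deployment (CD)", 7.9, 10),
    ("Replace specs with user stories", 72.0, 5),
    ("Co-locate business and dev team", 20.0, 3),
]

# Prioritize by saving per unit of effort, highest first.
backlog = sorted(measures, key=lambda m: m[1] / m[2], reverse=True)

for rank, (name, saved, effort) in enumerate(backlog, start=1):
    print(f"{rank}. {name} (~{saved:g}h lead time saved, ~{effort}d effort)")
```

However you score it, the point is the same: the improvements become ordinary backlog items that the team pulls and finishes one at a time.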

Step 7: Repeat the Exercise

The last step is the most important one, and it is the one most organizations forget. Value stream analysis is not a one-off exercise. It is something you repeat. I would typically do it every three months and look at how the stream has moved toward the target.

The reality on the ground changes. New tooling becomes available. The team learns. Bottlenecks shift. Without regular reviews, the value stream drifts back into old patterns, and all the effort you invested earlier quietly evaporates. If you repeat the exercise, improvement becomes a habit instead of an event.

Key Takeaways

  1. Make the value stream visible first. Draw the steps from idea to production and name the people in each step. You cannot improve what you have not made explicit.

  2. Measure process time, lead time, and percentage complete and accurate. These three numbers are simple, honest, and enough to expose where the real problems are.

  3. Pay attention to the gap between process time and lead time. That is where your work sits in queues. Most of your lead time reduction will come from removing waiting, not from working faster.

  4. Design a target value stream, then engineer toward it. Replace manual testing with continuous integration, manual deployment with continuous deployment, and specifications with user stories written by the team.

  5. Prioritize a backlog of concrete improvements. Compare current and target, identify the measures needed, prioritize them, and execute them like any other piece of work.

  6. Repeat every three months. Continuous improvement is the real output of this exercise. One analysis alone changes very little; a cadence changes the organization.