Agility in Action: Mindset, Processes, and Real Results

Author
Romano Roth
I believe the next competitive edge isn’t AI itself, it’s the organisation around it. As Chief AI Officer at Zühlke, I work with C-level leaders to build enterprises that sense, decide, and adapt continuously. 20+ years turning this conviction into practice.

How much agility can software development really handle, and where does agility tip into chaos? In this episode of the “Modern Work 2 Go” podcast (in German), I speak with Florian Schneider about exactly these questions. We dive deep into a concrete real-world example: an agile transformation at a Swiss bank that I accompanied over eight years. The conversation covers the shift from waterfall to agility, scaling with SAFe, building value streams, and why continuous improvement is the central pillar of every transformation.

The Starting Point: Waterfall Meets Reality

In 2016, I joined a project at a bank that needed to make a regulatory system MiFID II compliant. The deadline was the end of 2017. The original plan was textbook waterfall: four teams, split by technical layers (UI, backend, database, services), each with a project manager, two business analysts, one developer, and one QA engineer. The plan called for six months of specification, three months of implementation, and three months of testing.

It was immediately clear to me that this would not work. After about two weeks, I proposed a simple experiment to the program leadership. We attempted a small vertical slice: implementing one small feature end-to-end with the existing teams and deploying it to a test environment. The result was a disaster. Nothing worked. But that experiment was exactly what triggered a change in thinking.

The Agile Shift: Domain-Driven Design and Honest Forecasts

We restructured the teams using Domain-Driven Design: identified domains and made the teams truly responsible end-to-end for their bounded context. Then we built a backlog, estimated epics, measured velocity, and created a burn-up chart.

The honest forecast was sobering: by November 2017, we could deliver exactly one third of the backlog. Two thirds were not feasible. Management naturally wanted to mandate weekend work and twelve-hour days. My answer: if we do that, we will probably deliver nothing at all. Instead, we consistently prioritized the backlog.

“By the end of the year, we had the system MiFID-compliant and in production. The forecast turned out to be exactly right: we delivered the necessary third. And the other two thirds? They simply disappeared in January. They were wishful thinking.”

That was a defining experience. Two thirds of the original backlog turned out to be unnecessary. The prioritized third was fully sufficient to meet the regulatory requirements.
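The arithmetic behind a forecast like this is deliberately simple: measured velocity times remaining sprints, divided by the estimated backlog. A minimal sketch (all numbers are hypothetical, chosen only so the result lands at roughly one third; they are not figures from the project):

```python
# Minimal burn-up forecast: how much of the backlog fits before the deadline?
# Backlog size, velocity, and sprint count below are illustrative, not real data.

def forecast_fraction(backlog_points: float, velocity: float, sprints_left: int) -> float:
    """Fraction of the backlog a team can finish at its measured velocity."""
    deliverable = velocity * sprints_left
    return min(deliverable / backlog_points, 1.0)

# Example: 900 points of estimated epics, 15 points per sprint, 20 sprints left
fraction = forecast_fraction(900, 15.0, 20)
print(f"Forecast: {fraction:.0%} of the backlog by the deadline")  # Forecast: 33% of the backlog by the deadline
```

The value of the chart is not precision but the conversation it forces: once the gap between wish and capacity is visible, prioritization replaces overtime mandates.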

Scaling: Why SAFe Alone Is Not a Solution

We scaled from four teams to ten. We introduced Scrum of Scrums, supplemented by selected SAFe elements like mini PI plannings and sync meetings. Scrum of Scrums would have been enough on its own. But the bank adopted SAFe organization-wide, and we were declared an Agile Release Train.

What we actually did: renamed the roles, reduced PI planning from the full two-day format to half a day, and in practice continued doing Scrum of Scrums. The decisive difference from other teams at the bank was that we kept the continuous improvement sprint.

Many SAFe implementations fail because this improvement sprint is the first thing to be eliminated, since teams “need” to deliver features. What remains is SAFe purely by the book: you run the ceremonies, but you never improve the system. Our Agile Release Train was one of the showcase ARTs, while other teams complained that they could not get any work done because of all the synchronization meetings.

My clear recommendation: SAFe makes sense when you have more than 50 people working on a product with hard dependencies between teams. But if you can resolve those dependencies organizationally, process-wise, and architecturally, you do not need the framework at all. Then teams can work autonomously, and that is the real goal.

Breaking Down Silos: IT and Business Merge

One of the most impactful experiences was the conflict between IT and business. The bank had a classic setup: IT silo and business silo. Our program manager sat in the business unit, while IT management insisted on the waterfall process. There were bets of several thousand Swiss francs that our project would fail.

When we delivered successfully at the end of 2017, IT wanted to pull the team back and return to waterfall. Instead, the business unit took over the development team entirely. This shortened decision paths and increased efficiency significantly.

This experience solidified a conviction that I have since seen confirmed in many projects: in forward-thinking organizations, the separation between IT and business will disappear. There will only be cross-functional teams working on a product, built on a shared platform.

From Value Streams to Organizational Change

Over the years, the bank introduced value streams. The first value streams were drawn along political lines and led to massive friction. But step by step, the organization learned. The real breakthrough came when value streams were given end-to-end responsibility, including budget and the principle of “you build it, you run it.”

Suddenly, the portfolio got cleaned up: applications that nobody knew existed, applications with five users and enormous maintenance effort, duplicate capabilities. Entrepreneurial thinking entered the value streams, and efficiency increased dramatically.

Today the bank is taking the next step and restructuring its organizational hierarchy as well: the Release Train Engineer becomes the manager of the developers, the Product Manager becomes the manager of the Product Owners. Process organization and reporting structure merge within the value stream.

AI and Vibecoding: Why Clean Software Engineering Matters More Than Ever

In the conversation, we also discuss the role of AI and vibecoding. At Zühlke, we use AI consistently. It augments our work but does not replace people. We have our own Cybernetic Delivery Method that describes how we develop software together with AI.

What I learned from vibecoding: you have to specify precisely and write tests. For me as a DevOps thought leader, this is a dream come true, because AI forces us to practice clean software engineering. You need to describe clearly what you want, your tests need to be correct, and your software needs to be modularly architected so that the context stays small.
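In practice, “specify precisely and write tests” means stating the desired behavior as executable tests before asking an AI assistant to produce or refine the implementation. A small sketch of that workflow; the function, its fee rules, and the numbers are hypothetical, invented purely for illustration:

```python
# Test-first sketch for AI-assisted development: the assertions below ARE the
# specification the generated code must satisfy. Function and rules are made up.

def apply_transaction_fee(amount: float, is_retail_client: bool) -> float:
    """Candidate implementation (e.g. AI-generated), kept small and modular
    so the context handed to the assistant stays small."""
    fee_rate = 0.02 if is_retail_client else 0.01
    return round(amount * (1 + fee_rate), 2)

# Precise, executable specification: if these fail, the AI's output is rejected.
assert apply_transaction_fee(100.0, is_retail_client=True) == 102.0
assert apply_transaction_fee(100.0, is_retail_client=False) == 101.0
```

The discipline is the point: correct tests and a modular architecture are what make the AI's output verifiable, regardless of which assistant produced it.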

At the same time, we see in our monthly AI exchanges that many experienced software engineers use copilots for boilerplate code and tests but turn them off when it comes to truly new algorithms. AI is not a silver bullet. It cannot build an SAP system from scratch, and you need to proceed in small steps.

Key Takeaways

  1. Continuous improvement is the central pillar. Every day, ask what can be better tomorrow, and constantly work on the system. Without this pillar, every framework becomes bureaucracy.

  2. C-level support is essential. Without a clear mandate from the top, agile transformations get stuck at the level of individual teams. Big changes require big support.

  3. Dependencies are the real enemy. It is not missing tooling or wrong frameworks that slow teams down, but organizational, process, and architectural dependencies. Resolve those, and you need less coordination.

  4. Transformation requires patience. Not every organization can or must change overnight. Small, continuous steps can be equally effective, as long as you stay committed.

  5. Honest forecasts prevent wishful thinking. Prioritization based on data (velocity, burn-up charts) creates transparency. It often turns out that a large portion of requirements is not actually needed.

  6. AI makes clean engineering more important, not obsolete. Vibecoding and AI-assisted development only work with clear specifications, good tests, and modular architecture.