“Re-engineering is the fundamental rethinking and radical redesign of business processes to achieve dramatic improvements in critical contemporary measures of performance, such as cost, quality, service and speed.”
— Michael Hammer and James Champy, Reengineering the Corporation¹
“As companies get larger and more complex, there’s a tendency to manage to proxies… A common example is process as proxy. Good process serves you so you can serve customers. But if you’re not watchful, the process can become the thing. This can happen very easily in large organizations. The process becomes the proxy for the result you want. You stop looking at outcomes and just make sure you’re doing the process right.”
— Jeff Bezos, CEO of Amazon; “2016 Letter to Shareholders”²
This chapter begins with a caution.
Why is it that the CEOs of Amazon, Netflix and Salesforce all disparage process? Processes exist everywhere. Nothing in the enterprise can be accomplished without following a process. Processes can be run poorly or well; surely it is valuable to improve a process. So what do these CEOs know that the rest of us don’t?
Bezos, Hastings, Benioff and others know that process redesign, if done properly, plays an important role in the fit systems enterprise. But they also know that a focus on process is fraught with risk. It is so very easy for process to become the thing. Process is never the thing. Process must always serve business outcomes.
In the Quality era of the eighties and nineties, leaders operating from the structural-functional leadership paradigm mobilized Six Sigma projects and Kaizen throughout their enterprises, seeking to create radical efficiency. Books like Michael Hammer’s Reengineering the Corporation presented a road map. In many G2000 enterprises, process redesign continues to be seen as a primary source of competitive advantage. Process outputs are tightly measured. Inefficiency is expunged. Mastery of process is a coveted virtue.
But when a company makes process efficiency the king, it can quickly lose sight of what really matters.
Consider that the biggest business outcome improvements often occur completely independently of processes. A big channel partnership deal started with an inspired conversation at a cocktail party. A new product breakthrough started with a sudden flash of insight after three weeks living inside the customer’s work environment. The breakthrough happened when a highly talented person creatively engaged an important problem with deep caring. You can’t force that into a process. It’s anti-process.
In The Loop leaders work fiercely to protect these breakthroughs, and the people who generate them. It is far too easy for process to kill the goose that lays the golden eggs. In the fit systems enterprise, it’s the job of leaders to make sure that doesn’t happen.
I have mentioned before that in today’s fast-changing world, the most important domains in the enterprise are those that exhibit patterns of high variation. This starts with product discovery system domains, where new value breakthroughs are pursued. Next in importance are the domains in the product management system. After that come the high variation domains in the revenue engine system, such as marketing campaign management and sales opportunity management.
In these high variation domains, processes are certainly important. You need to keep the trains running on time. But the most important thing is not process. It’s the better definition of problems, and the generation of brilliant hypotheses to fix them. Brilliant hypotheses emerge from 10Xers who are highly skilled and care deeply about the problem domain. The generation of a brilliant hypothesis is a pre-rational, creative act. It happens when a highly qualified person immerses deeply in a problem, watching, pondering and suffering through lateral drift, until she comes upon an insight or series of insights. These insights are hypotheses.
After a hypothesis has been generated, process can take over. Some version of the scientific method can proceed. But In The Loop leaders value those rare human beings who are uniquely capable of generating a steady stream of high quality hypotheses. They appreciate that in high variation environments, process is secondary. Understanding the real problem and developing promising hypotheses are primary. If process runs even the slightest risk of constraining the creative state of 10Xers, process must stand down. If a focus on process causes too much energy to be spent internally at the cost of an outside-in, generative-first mindset, process must stand down.
So before I dive into the mechanics of process redesign, let there be no doubt: process serves results. Process must never cause people to turn excessively inward, compromising the enterprise’s generative-first orientation. Process must never hamper a 10Xer’s search for truth.
Results-Centric Process Redesign
The most important step in strategic planning is to define the enterprise’s bounded purpose. A bounded purpose states “what must be true” — the value inflection point that must be hit to achieve the next stage of growth. This value inflection point must be defined in terms of generative and adaptive outcome objectives, bounded in time.
At any given time, one system or another is the bottleneck standing in the way of achieving bounded purpose. If legacy products are in decline, then the product discovery system or corporate development system (acquisitions) is the bottleneck. If the problem is suboptimal marketing and sales disciplines, then the revenue engine system is the bottleneck. The first step in enterprise-level process redesign is to name the bottleneck.
Systems and domains are composed of stocks, flows and feedback loops. Stocks rise and fall. Cash is a stock. Employee motivation is a stock. Sales per month is a stock. Key Performance Indicators (KPIs) are all stocks. Flows are the transformations that occur inside systems and domains. A transformation causes a change in one or more stocks. This change triggers the next step in the process that advances system purpose. And feedback loops are the streams of data that flow out of stocks to inform and influence flows.
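The relationship between stocks, flows and feedback loops can be made concrete in a few lines of code. This is a minimal sketch with invented numbers: a cash stock, a revenue inflow, a spending outflow, and a feedback loop that throttles the outflow when the stock runs low.

```python
# A minimal stock-and-flow sketch (illustrative numbers, not from the chapter).
# The stock is cash; revenue is an inflow, spending an outflow; the feedback
# loop reads the stock level and throttles the outflow when cash runs low.

def simulate_cash(weeks, cash=100.0, revenue_per_week=20.0, planned_spend=25.0):
    history = []
    for _ in range(weeks):
        # Feedback loop: data flowing out of the stock (the cash level)
        # informs and influences a flow (spending).
        spend = planned_spend / 2 if cash < 50.0 else planned_spend
        cash += revenue_per_week - spend  # flows raise and lower the stock
        history.append(cash)
    return history

levels = simulate_cash(12)
```

Without the feedback loop, spending exceeds revenue every week and the cash stock drains steadily toward zero; with it, the system self-corrects around the threshold.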
This chapter is about redesigning processes. A process brings together people, workflows, technology and money flows to deliver a business outcome. No matter how well it performs, a process can always be improved.
In the fit systems enterprise, process design is a core competency at every level. In The Loop leaders possess it themselves, and teach it to others. They design their organizations in a systems-centric way. The boundary of a system should capture whole processes wherever possible. So too with domains. A boundary has integrity when a preponderance of work occurs inside its edges, pursuing a unique purpose. System-centric organization structure increases the likelihood that end-to-end processes will be managed by one dedicated team or set of teams, making continuous improvement easier.
If a process exists within one domain, the domain team can redesign it. If it crosses domains but remains within one system, the leader of the system must mobilize a team to redesign it. If a process crosses systems, the CEO and executive team must mobilize a team to redesign it. This is because the redesign team needs to be free to rethink everything: workflow design, what roles are needed, and who participates. Participants may change. Roles may change. Teams may change. The redesign team needs to be chartered at the right level in the hierarchy so it has the freedom to re-conceive the process without constraint.
In process redesign, the end in mind is an improved customer and shareholder experience. Notice the focus is not on serving an internal stakeholder. Every role in a company exists to serve customers (through creating value), and shareholders (through capturing value and creating profit). This is what makes a company healthy. Only a healthy company can maintain a healthy culture and offer lots of interesting, challenging roles.
In 1984, Eliyahu Goldratt published The Goal: A Process of Ongoing Improvement.³ More than thirty years later, the book is still a “must read” for business executives and software architects. It clearly articulates the fundamental principles of process design. These principles are equally relevant for designing human and digital workflows.
The Goal of Process Redesign
Goldratt argued that the overall goal of an enterprise system is to maximize net profit, ROI and cash flow. These three outcomes are achieved by increasing throughput while reducing inventory and operational expense. Throughput is the rate at which the system generates money through sales. Inventory is the amount of money the system has invested in purchasing things it intends to sell. In today’s world, where products are increasingly digital, “inventory” can be conceived as the costs that go into preparing a product for customer launch. Operational expense is the money the system spends in order to turn inventory into throughput. To optimize performance, the top priority is throughput gain. Inventory cost reduction is next, followed by reduction in operational expense.
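Goldratt’s three measures reduce to simple relations: net profit is throughput minus operational expense, and ROI is that profit divided by the money tied up in inventory. The sketch below uses invented figures to show why a throughput gain outranks cost cutting: it moves all three outcomes at once.

```python
# Goldratt's three operational measures, with illustrative figures.
# "Inventory" is treated as money invested, per the chapter's
# digital-product framing.

def net_profit(throughput, operational_expense):
    return throughput - operational_expense

def roi(throughput, operational_expense, inventory):
    return net_profit(throughput, operational_expense) / inventory

base   = roi(throughput=1_000_000, operational_expense=700_000, inventory=500_000)
better = roi(throughput=1_200_000, operational_expense=700_000, inventory=500_000)
# A 20% throughput gain lifts net profit from 300,000 to 500,000 and
# ROI from 0.6 to 1.0 -- with no cost cutting at all.
```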
The Tools of Process Redesign
These improvements happen inside processes. Effective process redesign leverages four variables:
- People
- Workflows
- Technology
- Money flows
A process exists to achieve some transformation. The transformation might be physical, such as the assembly of a machine in a factory. Or it might be digital — for instance, completion of a customer payment workflow within an e-commerce site. Or it might be a human service, such as the replacement of a tire or the provision of tax accounting services. Regardless of the type of process, its capacity is the capacity of its bottlenecks.
Process Redesign Concepts and Methods
A step can either be a bottleneck step or a non-bottleneck step. A bottleneck is any step whose capacity is less than the demand placed upon it. An hour lost at a bottleneck is an hour lost throughout the system. The transformation that occurs at any step is performed by a resource. The resource may be a person, or a machine, or a computer.
Bottlenecks attract backlogs. In fact, this is how you find bottlenecks. Small backlogs are fine — they are buffers that ensure consistent flow through a bottleneck step. But if they grow past some modest buffer level, they become a source of sub-optimization. Backlogs are caused by dependent events and statistical fluctuations in the flow of materials (or data). To avoid backlogs, you design the process so that any items that need to go through the bottleneck are addressed early in the process. This includes completion of any non-bottleneck steps that are ahead of the bottleneck in the workflow.
Non-bottleneck steps should run at the same throughput rate as the bottleneck step — not at their own potential capacity rates. Otherwise a backlog will emerge at the bottleneck. In other words, the capacity of the bottleneck determines the proper pace of work through non-bottleneck steps. This, of course, requires that you have feedback loops in place — so that the non-bottleneck step can be regularly updated as to the bottleneck step’s throughput rate.
Once you know when the materials (or data) that have been transformed and have passed through the bottleneck step must reach final assembly, you can calculate backwards to determine the timing of release of all materials (or data) in the process. By this means you can ensure that the release of materials through non-bottleneck steps occurs at a pace consistent with the capacity of the bottleneck.
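This pacing logic — the bottleneck sets the beat, and material release is calculated backward from the due date — can be sketched directly. Step names, capacities and lead times below are invented for illustration.

```python
# Bottleneck pacing and backward scheduling, with invented steps and numbers.

# Capacity of each step in units per hour; the slowest step is the bottleneck.
steps = {"cut": 50, "weld": 20, "paint": 40, "assemble": 60}
bottleneck = min(steps, key=steps.get)   # "weld"
system_throughput = steps[bottleneck]    # 20 units/hour for the whole process

# Backward scheduling: given a due time at final assembly, walk back through
# each step's lead time (hours, invented) to find when raw materials must be
# released into the process.
lead_times = [("assemble", 2), ("paint", 3), ("weld", 5), ("cut", 1)]
due_at = 100
release_at = due_at
for _, hours in lead_times:
    release_at -= hours   # ends at 89: release materials at hour 89
```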
These principles apply at the level of a domain process, a system-wide process and an enterprise-wide process. At an enterprise level, the ultimate idea is to match the pace of work through the enterprise’s bottlenecks with market demand while reducing inventory at non-bottleneck steps and reducing operational expense. As you optimize internal processes, you can shift focus to increasing market demand.
In improving workflows, a process map is a helpful tool. Here is an example of a swim-lane style process map. In this case, the swim lanes are all roles. But a technical platform could also be a swim lane. It would show the points of interaction between role and tool that enable the transformation to move forward.
For context, this particular process map is a “double click” of the flow arrows shown in orange on the overall revenue engine system map below:
In process redesign, when possible move bottlenecks to the beginning of a process. That makes it easier for the bottleneck to set the pace for the rest of the process. If people are needed to execute the bottleneck step, top performers should occupy these positions. If the resource that executes the bottleneck transformation is a machine or a computer, its performance is a first-order priority. Be sure not to build a separate quality control step into a process. Quality control should occur within a step, not after it.
For every process step, there are four sub-steps:
- Set-up time (the time a part or data waits while the resource is prepared to work on it)
- Process time (the time actually working on the process step)
- Queue time (the time a part spends in line for a resource while the resource is busy working on something else ahead of it)
- Wait time (the time a part is waiting for another part or data)
For parts or data that go through bottlenecks, queues are the primary issue. To solve for queues, cut the batch sizes that flow into bottleneck steps; smaller batches shorten queue times and speed flow. For parts or data going through non-bottleneck steps, waits are the primary issue. Non-bottleneck steps can become capacity constraint resources if their sequencing of work creates holes in the work-in-process buffers in front of bottlenecks. In general, it is best to work on parts or data in the sequence in which they arrive (first come, first done). This causes fewer holes in buffers, and simplifies tracking of parts or data.
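The contrast between queue-dominated bottleneck steps and wait-dominated non-bottleneck steps can be illustrated with the four sub-steps, using invented times:

```python
# The four sub-steps of a process step, with invented times in hours.

def step_elapsed(setup, process, queue, wait):
    return setup + process + queue + wait

# At a bottleneck, queue time dominates: work piles up behind the
# constrained resource.
bottleneck_step = step_elapsed(setup=0.5, process=1.0, queue=6.0, wait=0.5)

# At a non-bottleneck, wait time dominates: the part sits waiting for a
# sibling part or data before it can move forward.
non_bottleneck_step = step_elapsed(setup=0.5, process=1.0, queue=0.5, wait=3.0)
```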
In a world of customized products and many customer configuration permutations, processes often must solve for variation. Variation can express itself in multiple use case scenarios, or in multiple ways of doing the same thing. Processes improve when variation is narrowed. When new work enters the process, try to pinpoint its permutation right away, so that it can be sent down the proper route. This reduces variation at later process steps. If the same thing is done different ways by different teams, figure out the best way to do it and get everyone aligned. Cut variation whenever you can.
Whether in a digital flow or a human flow, eliminate unnecessary steps. Extra steps sub-optimize because they waste time and increase failure risk. If one person can perform multiple sequential tasks without excessive cognitive load, combine them into a single role. That makes the job more interesting, which increases motivation. It also reduces the risk of handoff failures. Reduce checks and controls. Minimize approvals. The process itself should be built for quality.
Workflows can be improved through technology. Once upon a time, information could appear only in one place at a time. Today, with continuously updated databases and instant communications, information can appear simultaneously in as many places as needed. Once upon a time, complex processes required senior experts. Today, machine learning-based decision systems can help generalists perform expert work. Once upon a time, many roles consisted of simple, repetitive tasks completed over and over. Today, robotic process automation can free humans from monkey work. Consider how technology can be introduced to revolutionize process performance.
Lags, Oscillations and Archetypes
Process design must account for lags. Lags increase risk of oscillation. In The Fifth Discipline⁴, Peter Senge described a game created at MIT to help business students and leaders better understand oscillation risk. It’s called the beer game. In this game, there is a retailer, a wholesaler and a brewery manager. Each player’s goal is to maximize profit.
In the game, there is a four-week lag between a retailer’s order of beer from the wholesaler and its arrival at the retail store, and another four-week lag between the wholesaler’s order of beer from the brewery and its arrival at the wholesaler’s warehouse.
The retailer likes to keep 12 cases of beer on hand at all times, as a buffer to meet variations in demand. One week, the retailer experiences a doubling of the normal customer demand from four to eight cases. When the eight cases are sold, he has just four cases left on the shelf when the next week’s shipment of four cases arrives.
The retailer responds by doubling his normal four-case weekly order from the wholesaler to eight for that week. Demand continues high in Week Two; eight cases are sold again. Because of the four-week lag between ordering and the retailer’s receipt of the ordered beer, he still receives only four new cases of beer that week, which he quickly sells, leaving him empty. This causes him to increase his order for the next week to twelve cases, to make up for the lost buffer.
In Week Three, only four cases arrive again, which he quickly sells. Again he’s out of beer. Frustration rising, he orders sixteen cases. He keeps increasing the size of his orders, becoming more and more frustrated that he can’t get ahead of demand. This pattern ripples up to the wholesaler. He has the same problem with the brewery — as the retailer’s orders grow, the wholesaler falls short. He increases orders to the brewery. But the brewery can’t produce more beer that fast. The wholesaler reacts to under-shipments by increasing order size to make up for the gap.
Retail sales quickly settle at the new, higher level of eight cases a week. By this time, orders have built up in the system. The retailer has been channeling his frustration into ever growing order requests. The wholesaler, facing an under-stocking of a popular beer, has similarly increased his orders to the brewery. The brewery, thinking its beer has become an overnight sensation, has cranked up its production to meet expected future demand.
Inevitably, the compound effect of the ordering pattern floods the system with oversupply. Retail orders collapse. Wholesale orders collapse. The brewery, wholesaler and retailer are all left with a huge overstock of beer.
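A single echelon of the beer game is enough to reproduce this oscillation in code. The sketch below is a deliberate simplification: one retailer, a four-week shipping lag, and a naive ordering rule (“replace what I sold and rebuild my buffer”). The function name and parameters are invented for illustration.

```python
from collections import deque

# One echelon of the beer game, simplified. Demand steps from four to eight
# cases per week; ordered beer arrives after a four-week lag; the ordering
# rule naively replaces sales and rebuilds the 12-case buffer.

def run_beer_game(weeks=30, lag=4, target_buffer=12):
    inventory = 12
    pipeline = deque([4] * lag, maxlen=lag)  # orders already in transit
    orders, stocks = [], []
    for week in range(weeks):
        demand = 4 if week < 1 else 8        # demand doubles in week 1 and stays
        inventory += pipeline.popleft()      # this week's delivery arrives
        sold = min(demand, inventory)
        inventory -= sold
        shortfall = max(0, target_buffer - inventory)
        pipeline.append(sold + shortfall)    # panic rule: replace sales AND rebuild buffer
        orders.append(sold + shortfall)
        stocks.append(inventory)
    return orders, stocks

orders, stocks = run_beer_game()
```

Run it and the lag does its work: the retailer stocks out within a few weeks, orders spike well above the new eight-case demand, and once the pipeline catches up, the shelf ends far above the intended twelve-case buffer.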
Wherever lags exist in a process, oscillation is a risk. You avoid oscillation by designing the process in its end-to-end entirety, and viewing it holistically. If you’re focused on a supply-side process, you include in your design the external supply chain, made up of supply partners, and then you track demand. If it’s a demand-side process, you include the entire demand chain, made up of channel partners and customers, and then you track supply. This holistic approach helps you keep ahead of the effect of lags.
Lags can lead to archetypes — such as the accidental enemies archetype, or the escalation archetype. You are more likely to avoid these if you understand the lags that are inherent in processes. This mindset will help you react cautiously when you see an upward or downward spike in demand or supply.
Lags can also create issues inside technical systems. Reactive microservices architecture is a compelling architectural style in part because it leads to technical systems more capable of dealing with lag and oscillation risk. With services linked by asynchronous message passing, back-pressure capability built into message queues, and the capability to increase and reduce compute power on the cloud to meet changes in demand, the issues with lag and oscillation in digital systems have been somewhat reduced.
Technology can also reduce lags in human workflows, reducing the risk of oscillation. It does this by giving the humans inside systems and domains up-to-date data about the status of key stocks. This form of digital leverage increases both efficiency and resilience.
Quality = Results of Work Efforts / Total Cost
If you can increase the quality of a process while reducing its cost, you have improved profit, ROI and cash flow for the process and the enterprise. Costs go down when rework is reduced, when human labor is replaced with automation, when process input cost is reduced or when the volume of required process inputs is reduced per unit of output.
In the fit systems enterprise, self-organized teams pursue continuous process improvement by embracing a culture of Kaizen. There are five steps in the continuous improvement of a process:
1. Identify its bottlenecks
2. Decide how to exploit the bottlenecks
3. Subordinate everything else to the above decision
4. Elevate the bottlenecks
5. When a new bottleneck emerges that replaces an old bottleneck, go back to Step 1
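The five steps form a loop, and the loop can be sketched directly. The example below is illustrative only: the capacities and the elevation increment are invented, and “elevate” is reduced to simply adding capacity at the current bottleneck.

```python
# The five focusing steps as a loop (illustrative capacities and demand).

def focusing_loop(capacities, demand, boost=10):
    steps = dict(capacities)
    while min(steps.values()) < demand:
        # Step 1: identify the bottleneck (the lowest-capacity step).
        bottleneck = min(steps, key=steps.get)
        # Steps 2-3: exploit it and subordinate other steps to its pace
        # (pacing is a no-op in this sketch).
        # Step 4: elevate the bottleneck by adding capacity.
        steps[bottleneck] += boost
        # Step 5: a new bottleneck may now exist; the loop repeats.
    return steps

result = focusing_loop({"cut": 50, "weld": 20, "paint": 40}, demand=45)
# "weld" is elevated until "paint" becomes the constraint, which is then
# elevated in turn, until every step can meet demand.
```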
Process redesign is a core competency in the fit systems enterprise. Done properly, process improvements are a good thing. They can increase resiliency, scalability and efficiency. But In The Loop leaders take care to ensure that process is always in service of business outcomes. They fiercely protect 10Xers inside high variation domains from any process that might impinge on creativity. In The Loop leaders know there is nothing more valuable than a 10Xer’s hypothesis. No process can be allowed to intrude on that.
- Hammer, Michael, and Champy, James. 1993. Reengineering the Corporation: A Manifesto for Business Revolution. Harper Business.
- Bezos, Jeff. 2017. “2016 Letter to Shareholders.” https://blog.aboutamazon.com/company-news/2016-letter-to-shareholders
- Goldratt, Eliyahu M. 1984. The Goal: A Process of Ongoing Improvement. North River Press.
- Senge, Peter M. 1990. The Fifth Discipline: The Art & Practice of the Learning Organization. Crown Business.
- Beergame.org: The home of the beer game. https://beergame.org/