Every step towards your generative and adaptive imperatives occurs inside domains. Bounded by the stage of your company and by the constraints of your customer LTV profile, you must steadily increase customer value, become more resilient, become more scalable and become more efficient.
Sometimes improvements come in big leaps. Perhaps you just signed a big channel partnership deal, or just released a promising new product, or transitioned to a massively scalable new accounting system. More often, improvements come in baby steps. You slightly increased your bottom funnel conversion rate, or you reduced the average days outstanding in accounts receivable. Or you reduced customer onboarding from thirty days to twenty-eight days.
Your generative and adaptive imperatives are advanced through a mix of big leaps forward and small incremental steps. The more of these you can engineer, the better. Most improvements happen inside domain teams, so it is the job of leaders to prepare those teams for success. This requires that you:
Define the Domain and Team
- Define the boundaries of the domain
- Decide the type of team responsible for managing the domain: an operational team, a technical development team, or both
- Define the roles within the domain team
- Select a great team leader
- Right size the team
Define Business Outcome Objectives, Metrics and Incentives
- Define the desired business outcomes (OKRs) of the domain
- Confirm these outcome objectives align with the objectives of adjacent domains
- Confirm the domain objectives advance its system’s purpose and the overall enterprise’s bounded purpose — ensure both hierarchical and horizontal alignment
- Clarify to the team its business outcome objectives
- Align incentives with business outcomes
Elevate Team Competency and Performance
- Increase self-organization of the team: assign responsibility and delegate authority to choose the path that best achieves its business objectives
- Increase functional competency of team members through coaching and development
- Increase the density of high performers on the team
- Improve the competency of the team leader through effective coaching
- Ensure effective coordination of handoffs between adjacent domain teams
Build Data Infrastructure and Technology
- Add relevant data feedback loops and make them available to the domain team
- Invest to improve the technical systems that support workflows within the domain and between domains
Once teams are in place, they are responsible for domain improvement. Four factors guide their improvement path.
First, the team must consider the type of problem domain. Is it a low variation domain, such as accounts payable? Or a high variation domain, such as dev teams, AE sales teams or the executive team? In low variation domains, the problem types typically follow a consistent pattern. In high variation domains, the problems vary.
Second, team members must take into account their team’s type. Is it a technical team (i.e., a dev team) focused on digitizing a workflow? Or an operational team focused on executing high-quality human steps in the workflow?
Third, the team considers the least invasive path for executing any change: can it be achieved via ad hoc individual action? Or is collaborative continuous improvement required? Or is it such a significant change that a formal project is required?
And fourth, it must consider the improvement scope. Is it within the domain team’s full control? Does it require collaboration between an operational team and technical team, both working on the same domain? Or does the change in question cut across domains, but within one system? Or does it cross systems? These considerations will impact who needs to be involved in the change.
All of these factors impact the way forward.
There are many permutations of domain teams: low variation or high variation, operational or technical, uni-functional or cross-functional. Each team has its nuances.
Uni-Functional, Low Variation Domain Teams
In a uni-functional, low-variation domain such as accounts payable, three identical operational domain teams might each be led by a working team leader. The leaders and workers all report into a functional manager, like this:
In this case, the presence of working team leaders enables the manager’s management span to widen to twenty-seven workers. Because the work is low variation, the functional manager can focus on coaching the three team leaders. She must also coach other workers, but since work is low variation and performance data is unambiguous, this is pretty straightforward. Most time is spent managing out the underperformers, recruiting high performers and coaching team leaders. The team leader and team members meet weekly to review dashboard performance metrics and identify opportunities to improve. Every meeting starts with the question: “Do the processes we follow continue to serve the company’s overall purpose?” The discipline of asking this question at the beginning of every meeting is a continuous reminder that process must serve business outcomes. It can never become an end in itself. Team leaders might occasionally invite people from adjacent domains to give and receive coordination feedback.
Cross-Functional, Low Variation Domain Teams
In the case of a cross-functional, low variation operational domain team such as sales development or accounts receivable, the role of functional management is similar, even though the structure is different. In this permutation, the leader that oversees the system must ensure the domain’s purpose remains in alignment with system and enterprise purpose. Take accounts receivable. The accounting system leader has primary responsibility. He has a light dependency on the revenue engine system leader. That’s because customer success reps, who are primarily focused on the customer success domain inside the revenue engine system, are also represented on these accounts receivable teams in a secondary role. Like this:
Low variation domains are targets for automation. Low variation workflows can often be automated via vendor platforms. In such situations, it may be necessary to add a TechOps person (SalesOps, MarketingOps, AccountingOps, CustomerSuccessOps, etc.) to an operational domain team, at least in order to select, implement, socialize and stabilize the platform. But if the automation opportunity requires software to be built, then you need a technical domain team (a development team) to build it. In such situations, the organization design would look something like this:
Notice that for this to work, leaders in multiple functions and systems must become comfortable sharing resources, working together to ensure domains are working towards aligned business outcome objectives and delegating responsibility into domain teams.
High Variation Domain Teams
All development teams are high variation teams. A high variation team is one in which there is high variation in the nature of the work itself, or in the specific problems to solve, or in the hypothesis-test-iterate cycle. Development teams are cross-functional, but high variation domain teams can be uni-functional. An example of a uni-functional high variation team is an email marketing campaign management team, or a corporate development team, or a legal team. The executive team is a cross-functional high variation team. So is an account-based sales team that combines SDRs and account executives to qualify and sell to prospects. In high variation teams, the difference between an average worker and a top performer can often exceed 10X. That’s why functional and system leaders (and CEOs) must work especially hard to increase the density of high performers in high variation environments.
The effectiveness of a domain team depends in no small part on its leader. I discussed self-organized teams and team leadership in Chapter 6. Suffice it to say that a leader is responsible for building a high-performing team, one in which members combine advocacy and inquiry, make data-driven decisions and seek to align the local optimum within their domain with the global optimum of the enterprise’s overall purpose. In parallel, functional and system leaders must work to uplift the competency of team members through coaching, performance management and surgical replacements when necessary. The density of high performers on a team is the single most significant factor in success, especially in high variation domains.
The Role of the Hypothesis in High Variation Domains
Domains improve by application of the scientific method. You generate a hypothesis, you test it, you document what happened, you prove it right or wrong, and if wrong or only partially right, you iterate. You keep iterating until you achieve the result you seek.
Some domains exhibit high variation, and others low variation. While all systems and domains exhibit some variation, the highest variation of all is in the product discovery system, where value breakthroughs happen:
The job of every domain team is to narrow variation and close in on “truth.” For product discovery domain teams, “truth” means proof of product/market fit. For a product management domain team, it means increased conversion, or increased feature engagement. For a marketing campaign management domain team, it means increased clicks and web landing page impressions. For the sales opportunity domain, it means an increased bottom-funnel close rate.
Variation measures the degree of risk that your hypothesis may be wrong. High variation occurs in fast changing environments, or where the problem domain is new to you, or where the problem itself is constantly changing. Whenever variation is present, it means you haven’t yet confirmed how to break through. You start with a hypothesis. This first step, the creation of a hypothesis, is where low variation and high variation domains are different. Low variation domains (such as accounts payable) generally have straightforward problems. The process improvement hypothesis is usually self-evident, quickly proven true. Not so in high variation domains, where more testing and iterating is usually required.
Consider the most important high variation domain of all, the product discovery system. Why is it so important to have 10X high performers on high variation teams such as new product teams? The answer is that 10Xers generate better hypotheses, execute the scientific method more effectively (data-driven hypothesis testing, followed by iterating and optimizing), and then, once truth has been discovered, implement and scale better than most.
In his book, Zen and the Art of Motorcycle Maintenance¹, Robert Pirsig wrote about the eureka moment when a brilliant hypothesis springs to mind. A problem solver might spend days or months pondering a problem. She tests, prods and ponders it subconsciously, in a state of what Pirsig calls “lateral drift.” And then one day, walking down the road, she has a flash of insight. Where once there was murkiness, now she can see to the bottom of the lake. Pirsig argued that hypotheses emerge outside of (and before) rational cognitive thought. They happen at the intersection of quality and caring — when a highly skilled person cares deeply about solving the problem.
10X product managers generate better hypotheses, leading to product value breakthroughs. 10X software architects generate better hypotheses, leading to more efficient, modular and scalable systems. 10X sales executives generate better hypotheses, leading to bigger and faster deals.
Yes, after the hypothesis the scientific method still must be followed. That’s process. But in high variation domains, the breakthrough starts with a brilliant hypothesis. The secret to rapid improvement in a high variation domain is to stock the domain team with 10Xers.
Moving to Domain Teams — A Story
It’s helpful to bring this all to life by an example. Let’s assume there is a company that has not yet organized its accounting department into domain teams. Consider this high level map of its accounting system:
This map shows the accounting system’s essential stocks and flows. We can see its people, workflows, technology and money flows. That’s the point of a map. Its purpose is to communicate. When you share a map like this with others who work inside the system, they will provide feedback. The feedback will help you improve the map. Eventually, you become confident the map includes all major stocks and flows. Mapping the system is the first step.
Let’s say there’s a problem in Accounts Receivable. As measured by Days Sales Outstanding (DSO), AR looks stable, but a bit high:
But your VP Finance is also tracking the percentage of accounts over 90 days outstanding. He finds this:
The additional time graph has exposed an emerging problem. While most customers have shown an improving trend, causing the previous graph to look stable, the improvement is offset by a growing percentage of customers who have fallen past 90 days outstanding. By tracking this additional stock (% past 90 days outstanding), the VP Finance has gained an insight he might not otherwise have received.
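The two measures the VP Finance is tracking are straightforward to compute. Here is a minimal sketch in Python, using invented invoice figures (none of these numbers come from the example):

```python
from datetime import date

# Hypothetical open invoices as (amount, invoice_date) pairs.
# All figures are illustrative.
invoices = [
    (12_000, date(2023, 8, 1)),    # more than 90 days old at the report date
    (8_000,  date(2023, 10, 10)),
    (5_000,  date(2023, 10, 25)),
    (15_000, date(2023, 7, 15)),   # well past 90 days
]
report_date = date(2023, 10, 31)

total_ar = sum(amount for amount, _ in invoices)

# Simple DSO: accounts receivable divided by revenue per day.
# Assume 250,000 of credit sales over the trailing 90 days.
credit_sales_90d = 250_000
dso = total_ar / (credit_sales_90d / 90)

# The second stock the VP Finance tracks: share of AR past 90 days.
past_90 = sum(
    amount for amount, inv_date in invoices
    if (report_date - inv_date).days > 90
)
pct_past_90 = past_90 / total_ar
```

With these invented figures, DSO alone (14.4 days) looks healthy, while 67.5% of outstanding dollars sit past 90 days — exactly the kind of insight the second graph surfaced.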
But let’s keep going. Let’s say your VP Finance is also interested in understanding employee motivation and energy. He knows the power of feedback loops. He has initiated a monthly “Net Promoter Score” for his team: “Would You Recommend this Job to a Friend or Colleague?” Here’s the data for the Accounts Receivable group:
As he reads through the comments in the survey, he notices a pattern. Multiple comments reference the fact that as the volume of customers has grown, the staff has stayed the same. The new tool they were promised, which would automate follow-up emails with customers, hasn’t arrived. Since individual performance is measured by the DSO of each rep’s assigned customers, workers have been focusing more of their efforts on the “easy” customers and reducing the time they spend to collect from the “complicated” customers. They don’t feel good about this. They know they are allowing some of their customers to slide towards collections.
But they feel they have no choice. They don’t have time to do everything, and they don’t have time for complicated conversations. A new supervisor has instituted a rule that every AR rep should make at least five calls per hour. This has caused reps to feel rushed, and to be abrupt with customers. They feel understaffed, overworked and disregarded by management. Motivation is low, and energy is flagging.
Because the VP Finance is In the Loop, he immediately sees two system archetypes at work — “limits to growth”, and “seeking the wrong goal.” His mind hops quickly to solutions, but on second thought he recognizes that fixing the problem is not his only goal. His biggest goal — even more important than the DSO score itself — is to develop his team.
So he pulls together five of his AR representatives and charters a project. Four weeks later, the team presents its findings:
- 72% of customers at >90 days outstanding indicated they had unresolved product and service issues
- Only 12% were non-responsive or resistant to paying their bills
- After calls were arranged and held with customer success representatives to resolve outstanding issues, 76% of the >90 day customers agreed to negotiated resolutions of the outstanding charges within two weeks
- The number of customers assigned per AR rep has increased 45% year over year
- AR reps perceive that their new supervisor is fixated solely on the average days outstanding number and calls per hour. In their view she hasn’t been open to recognizing the linkage between customer service issues and the average days outstanding
The team shares a graphic showing the feedback loops they observed in the system. A reinforcing feedback loop of escalating sales and revenue has been disrupted by a balancing feedback loop:
Note that the team’s feedback loop graphic does not specify stocks and flows. It just shows the dynamic cause and effect relationships at play. That’s OK. The purpose of a feedback loop diagram is to communicate; if it accomplishes that goal, its job is done.
Here are the team’s recommendations:
- Create a balanced scorecard for the AR department: customer net promoter score, average days outstanding and percent of customers >90 days outstanding
- Publish the scorecard weekly, not monthly
- Break the 21 person AR department into three teams of 7 each, track the “balanced scorecard” performance of each team, and initiate team performance review meetings each week — to go over the trends and identify ideas for improvement
- Ask the team with best balanced scorecard performance each month to present best practices at the monthly AR department meeting
- Initiate a buddy system for new employees
- Stop focusing on calls per hour per AR rep
- Change the supervisor’s role to focus on training and team support
- Change the performance review process to include peer as well as supervisor feedback
- Hire one analyst to create an email follow-up campaign — with aggressive testing of content, headings, sequencing and message segmentation; analyst would also provide teams with ad hoc data to further analyze patterns
- Create a Slack channel in which a running tally of customer-communicated product issues is maintained; encourage Customer Success department to participate
- Hold a monthly meeting between AR, Customer Success and Product to review product-related feedback received by the AR and Customer Success Teams
- Initiate a technical domain team that can focus on automating AR steps wherever possible
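The balanced scorecard recommendation lends itself to a simple weekly rollup per team. A minimal sketch, with hypothetical team names and metric values (nothing here is drawn from the actual example):

```python
# Hypothetical weekly balanced-scorecard metrics per AR team.
# All names and figures are invented for illustration.
teams = {
    "Team A": {"nps": 42, "dso": 38.5, "pct_past_90": 0.11},
    "Team B": {"nps": 31, "dso": 44.2, "pct_past_90": 0.19},
    "Team C": {"nps": 55, "dso": 35.1, "pct_past_90": 0.08},
}

def scorecard_row(name, m):
    """Format one team's weekly scorecard line."""
    return (f"{name}: NPS {m['nps']:>3} | DSO {m['dso']:>5.1f}"
            f" | >90d {m['pct_past_90']:.0%}")

# Publish weekly rather than monthly, so each team sees the results
# of its efforts quickly and can adjust.
for name, metrics in sorted(teams.items(), key=lambda t: t[1]["dso"]):
    print(scorecard_row(name, metrics))
```

The point of the sketch is the shape of the report, not the mechanics: each team sees all three measures together, every week, scoped to its own customers.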
If the recommendations were to be followed, the new organization design might look something like the following. There are now two development teams in the picture. One is focused on automation of the low variation work of the accounts receivable domain team. But the second development team is at work on the product itself, working to solve the problems that are causing accounts receivable issues:
As you can see, now the leadership coordination burden is higher. Systems leaders for the accounting system, revenue engine system and product management system must stay in sync with each other to ensure the individual domain teams are focused on the right things. And the technical and operational domain teams must now be in regular communication.
Consider the decision to launch the project team, and the team’s recommendations. The VP Finance and the project team members must have been systems thinkers. In her book, Thinking in Systems², Donella Meadows identifies twelve levers of systems intervention, presented below in order, ranked by degree of power to effect change. Let’s review how the VP Finance and the project team used each of these intervention levers in this example:
1. Transcend paradigms — the capacity to always remain flexible, open and unattached to any one paradigm
The Example: Notice that the first thought of the VP Finance was to solve the problem himself. But he then realized his goal was larger than just fixing the problem — he wanted to develop his team. Fixing needed to be paired with learning. This led him to charter a frontline-involved project. He changed his paradigm. If he were to routinely approach problems with this fresh, open approach, always willing to change his mindset if the situation requires, then he would exhibit a capacity to transcend paradigms.
2. Change the paradigm — the mindset out of which the system (goals, structure, rules, system delays and parameters) arises
The Example: The project team rejected the notion that the only thing that mattered was average days outstanding. Nor was it deemed sufficient to add the >90 days measure. They insisted on including customer Net Promoter Score — in other words, customer satisfaction. This was a new paradigm. It increased awareness that AR days outstanding is correlated with customer satisfaction. They also proposed a new organizational design — three domain teams. Also a new paradigm.
3. Change goals — the purpose and function of the system
The Example: The team’s change in measures alters the goal. The goal now becomes to reduce days sales outstanding, reduce the percent of >90 days outstanding and increase customer satisfaction. All three matter now. By re-envisioning the AR department as a stakeholder in customer satisfaction, its interactions with customers can be seen in a new light — as focused on improving the customer experience. This also opens the door to AR’s involvement in Product feedback discussions.
4. Increase self-organization — the capacity of a system to evolve itself
The Example: When the VP Finance chartered the project team, he delegated authority to solve the problem when he could have solved it himself. The project team delivered a more holistic set of recommendations than he would likely have developed on his own. Because the solution came from the people who do the work every day, there was more ownership. More importantly, the team learned — team members developed new competencies. And one of its recommendations, to reorganize the department into autonomous, self-organized domain teams, makes the change to self-organization permanent.
5. Change rules — incentives, punishments and constraints
The Example: The project team recommended eliminating the five calls per hour requirement. This shifts the focus from volume to quality; from output measures to outcome measures.
6. Change information flows — the structure of who does or does not have access to information
The Example: One recommendation was that the balanced scorecard data be shared weekly with each of the three teams, looking just at that team’s customers. By giving these self-organized teams access to time graph data on the key stocks for their customers, they are empowered to continuously improve performance and “own the result”.
7. Change reinforcing feedback loops — the strength of the gain of driving loops
The Example: The team with the best performance each month gains department-wide recognition through presenting its best practices to the whole department. As each team strives for this recognition, it sets up a positive reinforcing feedback loop.
8. Change balancing feedback loops — the strength of the feedbacks relative to the impacts they are trying to correct
The Example: The project team recognizes that accounts receivable performance is in part dependent on customer satisfaction, which in turn is dependent on product performance and the level of support provided by the Customer Success department. But rather than just accept this as a constraining balancing feedback loop, the team proposes creation of a Slack channel to capture product issue feedback, and a monthly review meeting to be held between Product, Customer Success and AR so that Product can consider the feedback in its product road map decisions. This is a great example of systems thinking — the team recognized its own participation in the entire system, not just its functional department.
9. Reduce delays — the length of time between an intervention and evidence of the system’s response
The Example: The team recommends that the balanced scorecard be published weekly vs. monthly. This allows teams to see the results of their efforts more quickly so they can adjust more quickly where appropriate.
10. Change stock-and-flow structures — physical systems and their nodes of intersection
The Example: The team recommended only one new hire — an analyst. The first job for the analyst would be to create an email follow-up campaign encouraging customers to pay their bills on time. The team theorized that this automated campaign would reduce the required number of custom emails and phone calls, making AR reps more efficient. Introducing the automated email campaign is a change in the stock-and-flow structure.
11. Change buffers — the sizes of stabilizing stocks relative to their flows
The Example: The team did not recommend hiring additional AR reps, which would have increased the “staffing capacity” buffer.
12. Change parameters — numbers against which things are measured
The Example: The team did not specify a hard and fast numerical goal for the three measures in the balanced scorecard; rather, they made a set of recommendations designed to encourage the three teams to continuously improve these numbers.
Domain interventions start with tracking stocks. That’s how problems and opportunities are discovered. But solutions live in the world of flows and feedback loops. Flows are business logic. They bring together the roles of people, the workflow steps, the inputs and outputs of technology, and the flow of money. A flow receives inputs from one stock and delivers outputs to another stock. Flows are improved through process redesign, automation, increases or decreases in money flows, and effective implementation.
Perhaps the biggest improvement that can be made to a system is to increase the quantity and quality of tracked feedback loops. Feedback loops impact flows like control knobs. The information contained in a feedback loop impacts behavior. Leading indicators are especially powerful feedback loops. These are the early sensing signals, highly correlated with downstream outcomes. Leading indicators will provide you with data that can quickly verify the impact of an intervention. The faster you can accurately predict the outcome, the faster you can fine tune. Often the most important feedback loops are those that track the competency, motivation, energy and staffing levels of employees.
Latency in feedback is a problem. It can cause people to overreact, causing oscillation to occur. When delays are unavoidable, it’s important for system actors to be patient and wait until clarity emerges. Don’t overreact. Remember the physician’s maxim: “first, do no harm.”
The Problem With Lags and Oscillation
In the Loop leaders work hard to put in place a diversity of feedback loops. By gaining feedback across multiple vectors (customer satisfaction, employee motivation, workflow status, financial performance, etc.) leaders and domain teams develop a more robust and holistic understanding of reality. But even with feedback loops in place, lags can exist between a decision and its execution, and between an action and its result (between cause and effect).
Consider the following examples of differences in time-to-completion windows:
Can happen quickly (days)
- If available, increasing the flow of money into the system (such as a new incentive bonus)
- Decreasing the flow of money (such as reducing marketing campaign spending)
- Firing someone
Takes some time (weeks)
- Hiring someone
- Implementing a significant workspace redesign
Takes more time (months)
- Uplifting motivation
- Uplifting competency
- Adding new competencies
- Finding an external partner
- Implementing a significant workflow redesign
- Changing or dropping a vendor tool
- Adding a vendor tool
- Moving to a new office
- Architecting, building and deploying new software
Takes a lot of time (months to years)
- Architecting, building and deploying a new technical platform
- Architecting and building a new facility
If you can gain quick impact without negative downstream effects, or if quick impact is vital despite downstream effects, then you pursue the “can happen quickly” or “takes some time” initiatives. But often the change that must happen can only be achieved with initiatives that take more time.
Cause and effect are often separated in time and space. Examples of interventions with significant outcome lags include:
- Impact of VC funding on company growth
- Impact of product discovery investments on achieving product / market fit
- Impact of product changes on retention
- Impact of a technology change on productivity
- Impact of an executed business development plan on closing new partnerships
- Impact of an executed enterprise sales plan on the rate of sales closes
- Return on invested capital
- Impact of an audit on future accounting practices
- Impact of increased payables focus on average days outstanding
- Impact of marketing campaigns on sales
Regardless of time horizon, interventions are more effective when they are multidimensional and holistic. An intervention that closely integrates and aligns changes in people (their roles, competency, motivation, energy and staffing) with the changes made to workflows, technology and money flows is likely to be much more effective than a change in just one component.
In situations where intervention and outcome are separated by significant time, the risk of oscillation is high. It takes an In the Loop leader to see the feedback loops at play, to track the system’s evolution over time, and to accept the inevitable delay between intervention and impact. When awaiting the response of the system to an intervention, doing nothing is often the best strategy.
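The oscillation dynamic is easy to see in a toy model. The sketch below (all parameters invented for illustration) simulates a manager nudging a metric toward a target each period. With no lag the metric converges smoothly; when the manager reacts to data that is several periods stale, the system overshoots the target and swings back and forth around it:

```python
def simulate(delay, periods=30, target=100.0, gain=0.4):
    """Each period, adjust the metric toward `target`, but react to
    the value observed `delay` periods ago (stale feedback)."""
    history = [0.0]  # the metric over time; starts far from target
    for _ in range(periods):
        observed = history[max(0, len(history) - 1 - delay)]
        history.append(history[-1] + gain * (target - observed))
    return history

no_lag = simulate(delay=0)   # smooth, monotonic approach to 100
lagged = simulate(delay=3)   # overshoots past 100, then swings below
```

The lagged run keeps "correcting" based on numbers that no longer describe the system, so each adjustment arrives too late and too strong. That is why, when a delay is unavoidable, waiting for clarity often beats reacting.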
Interventions are never “one and done.” Once your intervention’s result is clear, follow up with fine tuning actions or, if necessary, a redesign. You can always improve. The work of continuous improvement is, after all, continuous. That’s In the Loop thinking.
1. Pirsig, Robert M. 1974. Zen and the Art of Motorcycle Maintenance: An Inquiry Into Values. William Morrow and Company.
2. Meadows, Donella. 2008. Thinking in Systems: A Primer. Chelsea Green Publishing.