
Feature Factory



[Image: computers set out in lines like a factory, drawn in green, yellow and white in an illustrated style.]

Product-Engineering alignment is important for a successful organisation. It's what ensures you're building the right thing, in the right way, at the right time. When that alignment is off, whether because one team has too much power, there's no shared vision, or information doesn't flow freely between the teams, you're not going to get the best outcome.


Over my career, I've noticed some common (anti)patterns that come about when that alignment is off. In some cases I just endured them as I was too junior to know what was wrong, never mind how to fix it. In other cases I actually helped perpetuate the situation 😬 even as I tried to fix what I thought was wrong. Finally, I learned how to identify the root causes and get teams moving in the right direction.


This is the first post in the series and covers the "Feature Factory" anti-pattern. It's a tricky one because, especially to more junior folks, it looks like the problem is the product team being too powerful. In my experience, though, the pressure for ever more features comes from outside the product team.


What is it?

Feature factory is when your engineering teams are constantly working on new features. The pressure to deliver more customer value, more often, is relentless. Every sprint brings more and more new features, and the backlog is never-ending.


Why is it bad?

This might not seem like an anti-pattern; delivering new features is good, right? Except that there's no time to evaluate how those features are performing, and nothing ever gets removed, so the product becomes more and more bloated and harder to use.


In this type of culture you also get a lot of technical debt building up as corners are cut to get features out faster, which eventually slows development to a crawl. Even if you avoid the technical debt, the sheer weight of features can lead to unexpected interactions and fragile architectures, where adding a new feature breaks existing use-cases, again slowing down development. All of this leads to high staff turnover as people leave to work somewhere more dynamic.


How to recognise it?


Some signs you might have a feature-factory culture:

  • There is little or no experimentation

  • No features are ever removed from the app

  • Technical debt and platform improvements are never prioritised

  • Customer metrics are poorly understood or don't exist


What causes it?

At first glance this seems like an issue with the product team, but in my experience the root cause typically comes from outside the team, from senior leaders or stakeholders. It's often down to interpreting "customer value" as "new features delivered", which results in the product team being measured on the number of features shipped and incentivises the counter-productive behaviours discussed above.


How to fix it?

My advice is to get curious, experiment and re-evaluate as you try to find out where, and why, the pressure for new features is coming from. What you're aiming for is to convince people that feature count is not a great measure of customer value.


In my experience, metrics are the tool to employ in this situation. I’ve found that where engineering teams are turned into feature factories, the metrics being collected (if any) are not very detailed. Maybe customer satisfaction is measured occasionally or some feedback surveys are sent out, but nothing dynamic that you can see change day to day.


A good first step, therefore, is to collect some more in-depth customer metrics. AARRR is a good place to start (Acquisition, Activation, Retention, Referral and Revenue). It's the last one, revenue, that often does the convincing. If you can gather the data to show how that funnel looks in your org, you've got a powerful tool for convincing others that more thought needs to go into which features are selected.

Once you have the data, start asking questions. What are the trends? How does your funnel drop off from one stage to the next? How does that compare to others in your industry? You'll likely start to see some interesting patterns in these metrics; get curious about that big drop in activation or why your referral rate is so much lower than competitors'. Share your results as well: make it visible where the challenges are and you'll start to get people coming to you to find out more.
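To make the drop-off question concrete, here's a minimal sketch of the stage-to-stage conversion calculation through an AARRR funnel. The counts are entirely made up for illustration; you'd substitute your own analytics data:

```python
# Hypothetical AARRR funnel counts (illustrative only).
aarrr_funnel = {
    "acquisition": 10_000,  # visitors who arrived
    "activation": 4_000,    # completed onboarding
    "retention": 1_800,     # still active after 30 days
    "referral": 540,        # invited someone else
    "revenue": 270,         # became paying customers
}

def stage_conversions(funnel: dict[str, int]) -> dict[str, float]:
    """Conversion rate from each funnel stage to the next."""
    stages = list(funnel.items())
    return {
        f"{a} -> {b}": round(nb / na, 3)
        for (a, na), (b, nb) in zip(stages, stages[1:])
    }

print(stage_conversions(aarrr_funnel))
```

Even a table this small surfaces the questions worth asking: here the retention-to-referral step loses 70% of users, which is exactly the kind of drop to get curious about.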


Once you’ve got some metrics, you can start asking what you hope to change by delivering a new feature. What metrics will this move? If none, ask why you need it. There may still be a valid reason (e.g. regulatory*), but at least everyone is aligned on why. If the answer is “Steve says so”, go ask “Steve” or “Steve’s” team. There’s usually a real reason underlying the ask and by understanding it, you either agree the feature needs doing or will be able to offer an alternative way to meet the goal.
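One lightweight way to make the "what metric will this move?" question unavoidable is to record the expected movement alongside every feature request. This is a hypothetical sketch of my own, not a prescribed process or tool:

```python
from dataclasses import dataclass

@dataclass
class FeatureHypothesis:
    name: str
    metric: str           # the AARRR metric this should move
    expected_change: str  # e.g. "+5% activation within a month"
    rationale: str        # why we believe it, or the regulatory driver

    def is_justified(self) -> bool:
        # A request with no target metric and no rationale ("Steve says
        # so") needs a conversation before it goes on the backlog.
        return bool(self.metric) or bool(self.rationale)

req = FeatureHypothesis(
    name="one-click signup",
    metric="activation",
    expected_change="+5% within a month",
    rationale="drop-off at account creation is our biggest funnel leak",
)
vague = FeatureHypothesis(name="widget X", metric="", expected_change="", rationale="")

print(req.is_justified())
print(vague.is_justified())
```

The point isn't the code, it's that writing the hypothesis down forces the conversation to happen before the work starts rather than after.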


The next step is to measure. It can be hard to find the time to go back and look at features once they're released, but in my experience, once you start defining features in terms of the metrics they'll impact, people get a lot more curious about whether or not that was achieved. People love a good chart!
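At its simplest, "measuring" can be a naive before/after comparison on a daily metric, like this sketch. The numbers are invented, and in practice you'd want a proper significance test and to account for seasonality before claiming a win:

```python
# Hypothetical daily activation counts around a release (illustrative).
before = [120, 118, 125, 130, 122, 119, 127]  # week before release
after = [140, 138, 145, 150, 142, 139, 147]   # week after release

def mean(xs: list[float]) -> float:
    return sum(xs) / len(xs)

def pct_change(before: list[float], after: list[float]) -> float:
    """Percentage change in the mean of a daily metric."""
    return round((mean(after) - mean(before)) / mean(before) * 100, 1)

print(f"Activation moved by {pct_change(before, after)}%")
```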


The final step is to evaluate. OK, so that feature improved acquisition, but had a negative effect on referral; why? Can we live with that negative impact? If not, the feature needs to come out. If it did have a positive impact overall, great: why? What made this a good choice? What else might give similar results? These conversations become much easier to have when you've got the metrics.


I've not talked specifically about tech-debt here, although it often manifests itself in poor customer experiences, e.g. slow page load times and buggy interfaces that drive metrics down. You'll find it becomes easier to prioritise working on that debt when it can be linked directly to customer impact. By linking customer issues to exceptions, and those exceptions to technical debt, a team I worked with made an excellent business case for spending a few sprints fixing up a problematic micro-service, and were able to demonstrate a direct reduction in support calls as a consequence.


Looking at customer metrics can also help get platform and scaling work prioritised. If you need to increase the acquisition rate by 20%, can the platform cope? What about 30%? Where's the limit? What needs to happen to make that possible?
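Those capacity questions reduce to simple arithmetic once you have a measured ceiling from load testing. A sketch with hypothetical figures:

```python
# Hypothetical load figures; substitute your own load-test results.
current_peak_rps = 800      # requests/sec at today's peak traffic
platform_limit_rps = 1_000  # measured ceiling from load testing

def max_growth(current: float, limit: float) -> float:
    """Largest percentage traffic increase the platform can absorb."""
    return round((limit - current) / current * 100, 1)

headroom = max_growth(current_peak_rps, platform_limit_rps)
print(f"Headroom: {headroom}%")
# With 25% headroom, a 20% acquisition target fits, but a 30% target
# means platform work needs to land first.
```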


Finally, there are a lot of nuances to this and there’s definitely more than I’m able to cover in a blog post. If you want to find out more, I love working with product and engineering teams to improve the alignment and create a great customer-focused culture.


*Although even regulatory changes can often be tied back to customer metrics: it's likely that if you don't implement the change, your acquisition and/or retention will drop as the product is no longer fit for purpose.


