Inspired by the Joel Test.
Growth Engineering is a growing profession these days. But before you accept a shiny new job as a Growth Engineer, you should figure out the state of the Growth org.
How? Glad you asked.
The Alexey Test
- Do they have proper A/B testing infrastructure?
- Is the codebase set up for experimentation?
- How long does code take to go from "done" to "live"?
- Is quality tooling robust enough to keep you safe?
- How thorough is your Experiment Results Dashboard?
- Does your PM practice safe experiment hygiene?
- How in the weeds do Engineers get into the Data Stack?
- How many of the experiment ideas are coming from engineering?
- How many experiments does an engineer ship per quarter?
- Is the company at the scale that they're ready for Growth Engineering?
- Are teams fully empowered to move the metrics they own?
1. Does the company have proper A/B testing infrastructure?
Some surprisingly large companies have internal teams still using userId mod 100 to bucket users into experiments. These days, there's no excuse: a number of quality open-source options are widely available, not to mention mature solutions like Optimizely Full Stack and shiny newer entrants like Eppo, Statsig, and GrowthBook.
A full feature comparison is beyond the scope of this post, but at the very least, make sure that the potential employer's framework includes client-side hashing (so no API calls to compute bucketing), custom audiences ("only run this test on paid traffic in North America"), and the ability to override bucketing via a URL parameter (otherwise, manual testing will be a nightmare).
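To make the first two requirements concrete, here's a minimal sketch of what deterministic client-side bucketing can look like. Everything in it is illustrative - the hash choice, the function names, and the ?exp_ URL convention are assumptions, not any vendor's actual API:

```typescript
// FNV-1a: a small, fast, non-cryptographic hash. Real frameworks tend
// to use MurmurHash or similar, but the principle is identical.
function fnv1a(input: string): number {
  let hash = 0x811c9dc5;
  for (let i = 0; i < input.length; i++) {
    hash ^= input.charCodeAt(i);
    hash = Math.imul(hash, 0x01000193) >>> 0;
  }
  return hash >>> 0;
}

type Variant = "control" | "treatment";

// Hashing the experiment name together with the subject id gives every
// experiment an independent 50/50 split, computed locally: no API call.
function getExperimentVariant(name: string, subjectId: string): Variant {
  const bucket = fnv1a(`${name}:${subjectId}`) % 100;
  return bucket < 50 ? "control" : "treatment";
}

// A URL override like ?exp_new_cta=treatment keeps manual QA sane.
function getVariantWithOverride(name: string, subjectId: string): Variant {
  const override = new URLSearchParams(window.location.search).get(`exp_${name}`);
  if (override === "control" || override === "treatment") return override;
  return getExperimentVariant(name, subjectId);
}
```

The point is that bucketing is a pure function of the subject and the experiment name, so it's fast, consistent across sessions, and easy to test.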
2. Is the codebase set up for experimentation?
Having to reinvent the wheel with every experiment would make your velocity grind to a halt.
In an ideal world, I would expect an experiment-friendly codebase to have helpers for things like getExperimentVariant(name, subjectId), as well as front-end specific component wrappers, like the React sketch below.
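A hypothetical wrapper of the kind I'd hope to find - the exact shape varies by codebase, so treat the names and props as assumptions rather than any particular team's API:

```tsx
import React from "react";

// Stub: in a real codebase this would call the A/B framework's SDK
// (see the bucketing sketch in question 1).
function getExperimentVariant(name: string, subjectId: string): string {
  return "control";
}

interface ExperimentProps {
  name: string;
  subjectId: string;
  variants: Record<string, React.ReactNode>;
}

// Renders whichever variant the subject is bucketed into, falling back
// to control if the bucketed variant has no registered component.
function Experiment({ name, subjectId, variants }: ExperimentProps) {
  const variant = getExperimentVariant(name, subjectId);
  return <>{variants[variant] ?? variants["control"]}</>;
}

// Usage:
// <Experiment
//   name="new-cta"
//   subjectId={user.id}
//   variants={{ control: <OldButton />, treatment: <NewButton /> }}
// />
```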
I've also seen approaches where all experiment-related code lives in a separate /experiment directory, allowing for a clear "have we productionized this yet" separation of concerns, as well as the ability to enforce different code coverage and style standards for code that is, at least for now, considered throwaway. It also makes it easier to see when an experiment from a while ago still hasn't been cleaned up.
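Roughly the layout I have in mind (a hypothetical structure, not a prescription):

```
src/
  components/          # long-lived code: full coverage and style rules
  experiment/          # throwaway-until-productionized, lighter standards
    new-cta/           # one directory per running experiment
    onboarding-copy/
```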
Finally, having a proper front-end component library makes a huge difference for front-end growth velocity, since it means engineers will spend more time in components and much less time implementing custom CSS.
3. How long does code take to go from "done" to "live"?
Growth lives and dies by the number of bets it gets to take, and the ability to iterate on those bets. When working in a web surface area, daily (or, ideally, continuous) deploys make a huge difference for how many iterations of an experiment a team can attempt within a quarter.
Native (iOS / Android) growth teams face a true uphill battle when it comes to fast iterations, since app store deploys often run on a weekly (or even slower) cycle, and, including the approval process, an experiment can languish in a "done but not live" state for as long as a month. Workarounds usually include a live code push of some sort, whether via React Native's CodePush, a Flutter equivalent, server-driven UI (SDUI), or my personal favorite, webviews.
4. Is quality tooling robust enough to keep you safe?
Growth optimizes for going fast. Going fast means slightly more willingness to break things. But, uhh, breaking things is bad and should be minimized. What does the team do to prevent (or minimize the impact of) breakages?
Are key metrics monitored and triaged via an on-call rotation, using something like PagerDuty? Is the automated test suite sophisticated enough to verify whether any of the key tests start failing with a specific subset of your experiments? Are Error Boundaries preventing an entire page from 500-ing because some below-the-fold experiment component is crashing?
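For the last of those, here's a minimal Error Boundary sketch, assuming React - the component and names are illustrative, not any specific team's implementation:

```tsx
import React from "react";

interface Props {
  fallback: React.ReactNode;
  children: React.ReactNode;
}

interface State {
  hasError: boolean;
}

class ExperimentBoundary extends React.Component<Props, State> {
  state: State = { hasError: false };

  static getDerivedStateFromError(): State {
    return { hasError: true };
  }

  componentDidCatch(error: Error) {
    // Report to your error tracker; a silently-crashing variant can
    // quietly poison an experiment's results.
    console.error("experiment component crashed", error);
  }

  render() {
    // On a crash, degrade to the fallback (e.g. the control experience)
    // instead of 500-ing the whole page.
    return this.state.hasError ? this.props.fallback : this.props.children;
  }
}

// Usage:
// <ExperimentBoundary fallback={<ControlWidget />}>
//   <BelowTheFoldExperiment />
// </ExperimentBoundary>
```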
5. How thorough is your Experiment Results Dashboard?
Having to recreate an "experiment results" query for every single experiment is a painful place to be. Not just because it's repetitive busy work (though it is). Thinking through "what kinds of cuts do we need to make sure to keep an eye on" (mobile vs desktop, paid vs organic, new vs returning) is a critical part of reviewing every experiment's results. The Experiment Results view is one that grows in sophistication over time, as the team learns about the peculiarities of its customers and product.
6. Does your PM practice safe experiment hygiene?
"Don't peek at your experiment results" is the sort of mantra that Data Scientists oft repeat and Executives ignore, just as oft. Left to their own devices, everybody will eventually game the metric you are holding them accountable for.
There are any number of ways to ensure that your team is being kept honest about its actual impact, from holdouts to re-runs to controlling for the winner's curse. A good indicator of intellectual rigor here is whose job it is to "grade" a team's impact - is it the PM, or a (less) biased outside stakeholder on the Analytics team?
7. How in the weeds do Engineers get into the Data Stack?
"I think we have Snowflake, but I'm not over there often; that's for analysts and PMs" is not a good thing to hear. Growth engineers who are fully empowered to be impactful tend to spend a non-negligible amount of time poking around at user behavior, whether making custom cuts of experiment results or checking relationships between various user actions.
This is, in fact, also the work of Analysts and PMs; the difference is, if an engineer can't self-serve to sate their own curiosity, they'll have a harder time getting on an Analyst's backlog. And a growth engineer that can't follow their own intuition is not going to have nearly as much to contribute to the roadmap.
Which brings us to…
8. How many of the experiment ideas are coming from engineering?
It's too easy to quote John Egan here, but he really nails it:
- On my team, I try to instill a sense of ownership where engineers act as a mini-PM for their projects. Engineers are responsible for their experiments beginning to end, starting from writing the doc about why we are running the experiment, implementing it, doing the final analysis and finally, making the recommendation to ship or not. They are also responsible for coming up with ideas to further increase the impact of their project beyond what was originally scoped. They are empowered to propose and run experiments on what they think will further increase its impact.
- http://jwegan.com/growth-hacking/3-habits-highly-effective-growth-engineering-team/
A scenario where the PM, Designers, Engineers, and external stakeholders all contribute to the roadmap is a happy mix. One where the PM is largely responsible for all the ideas is not. Steer clear of places where the role of the Growth Engineer is merely to implement the experiments.
9. How many experiments does an engineer ship per quarter?
Growth is a numbers game. At some point, quantity develops a quality of its own. Having good experiment tooling and a strong balance of smaller versus bigger projects should, in an ideal world, mean that an engineer is able to ship a new experiment into production once every couple of weeks. Anywhere in that order of magnitude is reasonable. If an engineer only typically ships one or two experiments a quarter, how much about user behavior are you really going to be able to learn?
10. Is the company at the scale that they're ready for Growth Engineering?
Companies will sometimes say things like "we're having some problems growing our userbase, let's bring in a growth team." If only it were that simple.
Realistically, a company needs Product Market Fit before optimizing customer acquisition is going to make a difference; otherwise, you're optimizing your ability to sell a product your customers ultimately are not interested in, and your time is better spent figuring out the right product.
Similarly, running lots of experiments requires a non-trivial amount of customer traffic; for B2B companies in particular, traffic may never get high enough for tactical experiments to reach statistical significance.
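For a sense of the scale involved, here's a back-of-the-envelope sample-size calculation using the standard two-proportion approximation; the baseline and lift numbers below are my own illustrative assumptions:

```typescript
// Required sample size per variant for a conversion-rate experiment,
// at 95% confidence (two-sided) and 80% power.
function sampleSizePerVariant(baseline: number, relativeLift: number): number {
  const p1 = baseline;
  const p2 = baseline * (1 + relativeLift);
  const zAlpha = 1.96; // two-sided alpha = 0.05
  const zBeta = 0.84;  // power = 0.8
  const variance = p1 * (1 - p1) + p2 * (1 - p2);
  return Math.ceil(((zAlpha + zBeta) ** 2 * variance) / (p1 - p2) ** 2);
}

// Detecting a 10% relative lift on a 2% baseline conversion rate:
console.log(sampleSizePerVariant(0.02, 0.1)); // ~80,600 users per variant
```

At roughly 80,000 users per variant, a site with 50,000 weekly visitors would need over three weeks just to fill both arms of a single 50/50 test, before accounting for multiple concurrent experiments.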
In these cases, itâs simply too soon for a dedicated growth engineering team.
11. Are teams fully empowered to move the metrics they own?
Seasonality and organizational boundaries are the two demons you have to battle to have sensible metrics.
Seasonality
A growth team that owns a metric like "revenue" for a gift-forward product is going to look like geniuses around Christmas and doofuses in January, year after year. A stronger metric, more within their control, would be something like "landing page to purchase conversion" or "year over year growth." While it is useful to keep an eye on global metrics, ultimately the scope of a team's true north metric needs to match the scope of its charter.
Org Boundaries
Companies ship their org charts. In practice, this means that trying to work in a surface area owned by a different team - whose own true north is unrelated to yours - is going to be an order of magnitude harder than working on code that your team owns.
Be wary of joining a team that has to hitchhike into others' code to get its work done; the time spent on politics and begging others for attention may exceed the time spent actually working.
Tags: #growth-engineering #engineering-management #advice-for-engineers