tl;dr: Self serve and product-led growth require engineers to implement a lot of analytics and tooling to be successful.
In the past two years ShiftLeft has gone from a sales-led company to a self serve product-led company. My colleague Alok has written several blogs about what we used to do and what we do now:
With self serve, users are able to sign up for our product, use it with demo apps or their own apps, invite their team, self-upgrade to a premium trial, and try out integrations and APIs without interacting with a single person from ShiftLeft. This is great for users discovering our product, but now we have a problem: we need to know what our users are doing.
This is where growth engineering comes in. Growth engineering is about implementing the systems and tooling to understand user behavior, improve the product, and streamline internal processes to make sales more effective.
Past readers of this blog know I come from a monitoring/observability background. After our self serve release, I went from working on observability for our backend services and databases to customer observability: understanding how our customers use and get value from our product.
There are two ways of implementing customer observability:
In addition to understanding what individual customers are doing, we also need an aggregate understanding of our self serve funnel. We adopted Dave McClure’s “pirate metrics”:
Measuring full-funnel metrics and dropoff rates is incredibly useful for understanding where to improve the self serve process.
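As a concrete illustration, stage-to-stage dropoff for an AARRR funnel is just a pairwise conversion calculation. The stage counts below are made up for the example, not ShiftLeft's real numbers:

```python
# Sketch: stage-to-stage conversion and dropoff for a "pirate metrics"
# (AARRR) funnel. Counts are illustrative only.

FUNNEL = [
    ("acquisition", 1000),  # visitors who signed up
    ("activation", 400),    # e.g. ran a first successful scan
    ("retention", 220),     # came back within 30 days
    ("revenue", 40),        # upgraded to a paid plan
    ("referral", 15),       # invited a teammate
]

def dropoff_report(funnel):
    """Return per-stage conversion from the previous stage."""
    report = []
    for (prev_name, prev_n), (name, n) in zip(funnel, funnel[1:]):
        conversion = n / prev_n
        report.append((f"{prev_name} -> {name}", round(conversion, 3)))
    return report

for step, rate in dropoff_report(FUNNEL):
    print(f"{step}: {rate:.1%} convert, {1 - rate:.1%} drop off")
```

The biggest dropoff in a report like this is usually the best place to focus improvement work.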
The challenge is getting all of the data in the same place. Furthermore, you can’t just have individual metrics. You need to have the individual metrics and be able to filter and group by arbitrary properties. You need to be able to answer questions like, “do GitHub signups have better activation than email signups?”
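A question like "do GitHub signups have better activation than email signups?" is a group-by over individual user records. Here's a minimal sketch; the field names (`signup_method`, `activated`) and the in-memory list are stand-ins for what would really be joined warehouse tables:

```python
# Sketch: activation rate grouped by an arbitrary user property.
# Field names and data are illustrative.
from collections import defaultdict

users = [
    {"signup_method": "github", "activated": True},
    {"signup_method": "github", "activated": True},
    {"signup_method": "github", "activated": False},
    {"signup_method": "email",  "activated": True},
    {"signup_method": "email",  "activated": False},
    {"signup_method": "email",  "activated": False},
    {"signup_method": "email",  "activated": False},
]

def activation_by(users, prop):
    """Activation rate grouped by any user property."""
    totals, activated = defaultdict(int), defaultdict(int)
    for u in users:
        totals[u[prop]] += 1
        activated[u[prop]] += u["activated"]
    return {k: activated[k] / totals[k] for k in totals}

print(activation_by(users, "signup_method"))
```

Because the grouping property is a parameter, the same function answers the analogous question for any other dimension (plan, referral source, and so on), which is exactly the arbitrary filter-and-group capability described above.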
At ShiftLeft we use Amazon QuickSight with funnel data coming from several data sources joined together: the product DB, Redshift (populated by Segment), and HubSpot. This means we have a single view of data from outside the product and inside the product.
This allows us to get comprehensive insights into our customer journey. For example, we’re able to track which blog posts lead to the most signups, and for those signups, we’re able to continue tracking to see which ones end up at activation or retention and eventually convert to customers.
Monitoring and observability only get you so far. Some insights require action, and more importantly, many actions can be automated. At ShiftLeft we have a variety of automated workflows, defined in code and in tools like Zapier. A few of the many things we monitor and have implemented workflows for:
The third workflow is actually quite complicated. Scans can break for any number of reasons, and it's important that the engineering team is notified so we can take a look and triage. Scan failures are rare enough that we can create a GitHub issue for each one. Depending on the failure, we either simply retry, or we deploy a fix to our code analysis pipeline to handle an edge case.
The workflow is meant to make troubleshooting as easy as possible. The GitHub issue is created by Zapier and includes details about the account (e.g. is it a free user's upload or a customer's?) and quick links to help with debugging (e.g. direct links to Kibana, already filtered to the scan and time range). The Zap also pulls on-call rotation data from PagerDuty to assign the on-call engineer to the issue, and keeps some state in Airtable for deduplication: repeated failures for the same app show up as comments on the same issue rather than creating a flood of new ones.
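The deduplication logic at the heart of that workflow is simple. Here's a sketch in plain Python; in the real workflow Zapier keeps this state in Airtable and calls the GitHub API, so the dictionary and helper functions below (`open_issues`, `create_issue`, `add_comment`, `handle_scan_failure`) are all hypothetical stand-ins:

```python
# Sketch: one GitHub issue per failing app, repeats become comments.
# The dict stands in for the Airtable state; the helpers stand in for
# GitHub API calls made by the Zap.

open_issues = {}  # app_id -> issue number

def create_issue(app_id, details):
    """Stand-in for creating a GitHub issue via the API."""
    issue_number = len(open_issues) + 1
    print(f"created issue #{issue_number} for {app_id}: {details}")
    return issue_number

def add_comment(issue_number, details):
    """Stand-in for commenting on an existing issue."""
    print(f"commented on issue #{issue_number}: {details}")

def handle_scan_failure(app_id, details):
    """Create one issue per failing app; fold repeat failures into comments."""
    if app_id in open_issues:
        add_comment(open_issues[app_id], details)
    else:
        open_issues[app_id] = create_issue(app_id, details)

handle_scan_failure("acme/webapp", "timeout in analysis phase")
handle_scan_failure("acme/webapp", "timeout in analysis phase (retry)")
```

The second failure for `acme/webapp` lands as a comment on the existing issue, keeping the on-call engineer's queue to one issue per broken app.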
These types of workflows are easy to implement (and with tools like Zapier, require no code or deployments) and provide a lot of value to the engineering team without having to worry about writing and maintaining similar services ourselves. We can iterate on workflows independently of our product.
So far all of the systems I’ve described are only available to the engineering team (and the product team in some cases). Eventually sales and marketing need to know about the leads and prospects they are working with. Sales and marketing teams use HubSpot and Salesforce, and all of the customer observability metrics I’ve described eventually have to make it into those systems. Segment allows you to send events and contact properties to both destinations.
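For context, a Segment `track` event is a small JSON payload (Segment's HTTP Tracking API accepts it via POST to `https://api.segment.io/v1/track`), and Segment fans it out to configured destinations. The sketch below only builds the payload without sending it; the event name and properties are illustrative, not ShiftLeft's actual schema:

```python
# Sketch: the shape of a Segment "track" event payload. Building only,
# not sending; event name and properties are made up for illustration.
import json
from datetime import datetime, timezone

def track_payload(user_id, event, properties):
    """Build a Segment-style track payload for a product event."""
    return {
        "userId": user_id,
        "event": event,
        "properties": properties,
        "timestamp": datetime.now(timezone.utc).isoformat(),
    }

payload = track_payload(
    "user_123",
    "Scan Completed",
    {"app": "demo-app", "plan": "free"},
)
print(json.dumps(payload, indent=2))
```

Once an event like this reaches Segment, routing it onward to HubSpot or Salesforce is configuration rather than code, which is what makes it practical to surface product usage to sales and marketing.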
In addition to Segment, there are several companies building solutions to sync customer data (also see reverse ETL):
“Growth engineer” is an interesting new role in PLG companies. It involves a deep understanding of the company’s product, experience with data engineering, and an understanding of sales and marketing. This is a very odd intersection of skills and knowledge! In a recent conversation with a VC I asked, “where can startups find people with this skillset?” and the answer I got was basically “they can’t.” Usually this role is assumed by an engineer who is already at the company.
In hindsight I have basically been wearing the product growth engineer hat for several years! If you’d like to chat about growth please reach out on LinkedIn or Twitter.