The good thing today is that there are so many different tools and technologies available to solve a certain problem. The bad thing is that there are so many different tools and technologies available to solve a certain problem.
Go-to-market or GTM teams — product, growth, sales, marketing, and customer success — are the biggest beneficiaries of a sound data stack.
The buying decisions, however, are often made by data engineering (DE) teams, leading to misalignment between the two camps.
Both camps want more control over their workflows: GTM wants a GUI to do more without relying on DE, while DE despises GUIs and wants to avoid building and maintaining the same thing twice. The divide is stark at B2B orgs where GTM relies heavily on product-usage data to personalize experiences at every customer touchpoint.
What GTM (non-data) wants
At the very least, GTM wants to be able to quickly visualize how the product is being used — where users are dropping off, what’s leading to activation, etc. — and personalize their outreach based on those factors.
To do this effectively without relying on DE, GTM needs, besides a product analytics tool, a visual segmentation tool to build audiences and sync them to their activation tools. A CDP fits the bill, and so does, to an extent, a reverse ETL (rETL) tool with visual querying capabilities.
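To make this concrete, here is a minimal sketch in Python of what a segment-and-sync step boils down to under the hood, with a local SQLite database standing in for the warehouse. The table, columns, and audience name are hypothetical; a CDP or rETL tool wraps this kind of logic in a visual interface so GTM never has to write it.

```python
import sqlite3
import json

# A local SQLite database stands in for the cloud data warehouse.
conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE product_usage (
        user_id TEXT, email TEXT, plan TEXT, key_actions_last_30d INTEGER
    );
    INSERT INTO product_usage VALUES
        ('u1', 'ada@example.com',  'free', 14),
        ('u2', 'alan@example.com', 'free', 2),
        ('u3', 'mary@example.com', 'pro',  40);
""")

# The "visual segment" a GTM user would click together, expressed as SQL:
# free-plan users who are highly active and therefore ripe for an upgrade nudge.
segment_sql = """
    SELECT user_id, email
    FROM product_usage
    WHERE plan = 'free' AND key_actions_last_30d >= 10
"""
audience = [
    {"user_id": row[0], "email": row[1]}
    for row in conn.execute(segment_sql)
]

# A rETL tool or CDP would now push this audience to an activation tool
# (email, ads, CRM). Here we only serialize the payload it would send.
payload = {"audience_name": "activated_free_users", "members": audience}
print(json.dumps(payload, indent=2))
```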
There’s another emerging category called PLG CRM that is trying to address this problem by enabling GTM to combine data from multiple sources, build segments, and trigger actions in activation tools based on those segments.
There are many companies in this category, so I might be wrong, but unlike a CDP or rETL tool, a PLG CRM doesn't actually move data downstream and is better suited to alerting use cases. There's also evident overlap between the use cases addressed by PLG CRM and rETL.
Irrespective of the technologies powering the tools, being able to analyze and activate data is what GTM teams want, which is a reasonable ask.
What DE (data) wants
It’s important to acknowledge that DE caters to the needs of all teams, not just GTM; the scope of work for DE is larger, which naturally influences the data tools they choose.
That said, DE is generally averse to tools that don’t fit neatly into their existing workflows and that require them to build and maintain additional data pipelines; CDPs and product analytics tools are the most common ones that fall into this category.
The ideal scenario for DE is to maintain a single pipeline: collect data from all possible sources into a data warehouse, write SQL to transform or model all of it for every purpose, and make the modeled data available in all the tools used for analysis and activation.
The data warehouse is the centerpiece of everything data teams do — setting one up is not negotiable and for good reason. Without a data warehouse in place, there’s really not much for a data engineer or analyst to do, especially if they wish to stick to best practices and use best-of-breed tools.
While it doesn’t take long to spin up a cloud data warehouse, ingesting data from various sources, transforming and modeling it, and syncing it to analysis and activation tools is far from trivial; the entire process can take weeks or longer depending on the number of data sources and the resources available.
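For a sense of what that single pipeline looks like in code, here is a deliberately simplified sketch, again using SQLite as a stand-in warehouse. The source data, model, and stage names are hypothetical; a real pipeline would use managed ingestion, a transformation framework, and a rETL tool rather than hand-rolled functions, and the hard part is running hundreds of such stages reliably, on a schedule, for years.

```python
import sqlite3

def ingest(conn: sqlite3.Connection) -> None:
    """Load raw events from a source (a hard-coded list here) into the warehouse."""
    conn.execute("CREATE TABLE IF NOT EXISTS raw_events (user_id TEXT, event TEXT, ts TEXT)")
    rows = [("u1", "signup", "2024-01-01"), ("u1", "invite_sent", "2024-01-03"),
            ("u2", "signup", "2024-01-02")]
    conn.executemany("INSERT INTO raw_events VALUES (?, ?, ?)", rows)

def transform(conn: sqlite3.Connection) -> None:
    """Model raw events into an analysis-ready table (one row per user)."""
    conn.executescript("""
        DROP TABLE IF EXISTS user_activity;
        CREATE TABLE user_activity AS
        SELECT user_id,
               COUNT(*) AS event_count,
               MAX(CASE WHEN event = 'invite_sent' THEN 1 ELSE 0 END) AS has_invited
        FROM raw_events
        GROUP BY user_id;
    """)

def sync(conn: sqlite3.Connection) -> list[dict]:
    """Hand the modeled table to downstream analysis and activation tools."""
    return [{"user_id": u, "event_count": c, "has_invited": bool(h)}
            for u, c, h in conn.execute("SELECT * FROM user_activity")]

if __name__ == "__main__":
    conn = sqlite3.connect(":memory:")
    ingest(conn)
    transform(conn)
    print(sync(conn))
```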
Consequences for GTM
It’s reasonable to say that sooner or later, every company has to invest in the process described above if it wants to maximize the value it derives from data.
However, prioritizing the ideal process over the immediate needs of GTM teams can leave them in a state of flux.
They have little control over their daily workflows and every new analysis or segmentation becomes a request for the data team to fulfill.
They need to become SQL-proficient in order to experiment and iterate.
And if there’s a broken pipeline affecting data quality, which in turn distorts the customer experience, they might never know, or might discover it only after the damage is done.
The middle ground
There are many factors at play when evaluating tools — implementation, scalability, extensibility, and interoperability. However, the most important is value. A powerful tool with insane performance and capability is no good if nobody uses it or derives value from it.
Therefore, companies need to embrace a middle ground by scoping the needs of their teams and investing in tools that can fulfill those needs.
For smaller companies without a dedicated data engineer, it just makes sense to empower GTM with tools they can use themselves.
The same goes for bigger non-tech companies with large GTM teams.
It even holds for enterprises that have the resources to keep both camps happy and to invest in processes that keep data quality from being compromised.
If your organization’s priority is to build a data team, by all means, invest in best-in-class tools for every layer of the data stack. On the other hand, if the goal is to empower GTM teams to analyze and act upon data using the tools they’re comfortable with, let them have those tools and work backward to figure out the challenges that emerge.
After all, the modern data stack is about enabling people to use data to do their best work, isn’t it?