In a recent post, I wrote about the hiring problem of the modern data stack. But the people problem extends well beyond hiring — resistance to change, along with a lack of ownership and collaboration, are other major challenges I’d like to address today.
Change: Technology Buy-in and Adoption
Data is evidently changing every aspect of the business landscape, and at the macro level, companies have come to terms with the fact that the only way to stay relevant, let alone thrive, is to build a strong data foundation.
At a micro-level though, change is affecting individuals of every rank at businesses large and small. People’s roles and responsibilities are changing, KPIs are changing, the way products are sold is changing, and of course, the tools people use are changing big time.
And while I could argue all day about how change is good, very few people are open to changing the way things are done — aversion to change is widespread and problematic. To stay on topic, I want to focus on the organizational challenges that arise when new technology needs to be adopted.
This is one of the most pervasive challenges today — getting buy-in for purpose-built tools from whoever’s in charge. A lack of budget comes up very often, which I believe is just an excuse.
The real issue is that people are scared of losing control — they’d rather stick to legacy systems as long as they don’t have to expend energy learning something completely new. And that’s understandable: not everybody has the time or the mental bandwidth to learn new technology.
The problem is exacerbated by the fact that the modern data stack is a complex web of tools with overlapping capabilities, some even spanning multiple layers of the stack.
The solution, however, doesn’t have to be as complicated. I believe a good start is to make it dead easy for everybody, including folks from data-adjacent teams, to understand the purpose of each layer of the data stack — the problems they address, the benefits they offer, and the teams they cater to.
Vendors have a big role to play here which I’d like to cover in a future post.
Getting buy-in is a hard, albeit one-time, thing. Getting people across teams to adopt new tools and technologies is a very hard, ongoing thing.
Serious resources are spent evaluating and implementing tools, but enablement is often an afterthought. It’s not uncommon for companies to spend thousands of dollars a month on tools that literally nobody uses (it’s true).
It has never been more crucial for companies to think about how to prepare and equip teams with the training and resources they need to successfully adopt new tools and use them effectively in their day-to-day work.
Ownership: Buying, Implementation, and Team Structure
Who owns the buying process? Who is responsible for implementation? Who takes care of ongoing support and enablement? And who takes the onus when something goes wrong? How are data teams structured? Centralized or distributed?
Lots of questions here with no concrete answers.
Data tools are unlike most other SaaS tools, where the buyers are also the users — Engineering buys and uses its tools, DevOps buys and uses its tools, and Go-to-market (GTM) buys and uses its tools.
Data tools are used, directly or indirectly, by multiple teams and sometimes by the entire organization.
The universal nature of data tools creates friction in the buying process — I’ve written before about the divide between GTM and data engineering (DE) teams. The struggle even extends to sellers: companies that make data products are constantly trying to ascertain who the ideal buyer is.
People seldom pay as much heed to implementation as they do to the buying process; however, implementing a set of tools in a tightly integrated fashion is anything but trivial — an individual or a set of people needs to own the process end-to-end.
I have seen this first-hand and am sure you have too — GTM teams do a great job at evaluating a bunch of tools but assume that their job is done once procurement is over, leaving the implementation to data and engineering folks. Or sometimes GTM teams are not even involved in the buying process but are expected to start using brand new tools immediately after implementation, without any enablement or support.
In an ideal world, the implementation of data tools is a collaborative process with multiple owners representing both buyers and users who are also jointly responsible for enablement, support, and fixing breakages.
What does an ideal team structure look like? There isn’t a one-size-fits-all answer; it depends on many factors. But getting it right can help avoid frustration and fault-finding.
At the very least, companies that are investing in modern data tools need to create a bridge between the data team and the rest of the organization. This can be an individual, an ops team, dedicated ops teams representing each function, or maybe even a distributed data team where each core function gets a data person (or people) who handles the data needs of that function.
Fixing the people problem within your org is the most impactful thing you can do when it comes to building sound data infrastructure. There’s no shortage of amazing tools and technologies, but they’re only as good as the processes and resources available for teams to derive value from them.
Oh and if you haven’t already, do subscribe!