One of the key factors that allows a company to survive, thrive, and grow is nailing down its ideal customer profiles (ICPs) and personas. However, several data issues can muddy these two key coordinates and keep you from drawing a straight line to recurring revenue. Those issues can be lumped into one or more of three buckets:
- Not a large enough data set
- Not enough data within the data set
- Too much data
Dealing with Your Data Challenges
The first step in dealing with your data challenges is determining which one (or more) of the above three challenges you and your organization are encountering, as each needs to be dealt with in its own way. However, the techniques for handling them build on each other, so if you start out on the right path, you won’t need to retool later if another issue arises.
Not Enough Data
The first question is how much data is enough to start making a data-driven, informed hypothesis. The unfortunate answer here is “it depends.”
If you are looking at a massive total addressable market (TAM), finding a statistically significant thin slice of ICPs and personas willing to buy your product at your desired price can be very tricky.
While a larger company that is already selling into multiple industry verticals, sectors, and sizes may not have enough data to make statistically significant predictions, they could have enough information to recalibrate. On the other hand, a smaller company might not have enough data to run any meaningful analysis, so they would need to make a hypothesis-driven decision based on a small set of data.
This is what I ran into at FatStax.
FatStax Case Study
FatStax is a digital catalog built for large companies with big teams of field reps. When I started, our core targets were life science companies that sold very technical products to very technical teams. Our niche was an easy-to-use, easy-to-update platform that helped sales reps become more specialized and thus build credibility with technical buyers.
We soon discovered that FatStax had a tiny contingent of traditional manufacturers using our platform: plumbing, HVAC, and construction companies. They initially bought small pilots; however, they would lean in once they realized the competitive edge we provided. Interestingly, while their initial purchase price was much smaller than in the life science or healthcare industries, their payments over the first and second years, and their lifetime value (LTV), were much higher.
FatStax did not have nearly enough of those initial manufacturing clients to make any statistical, data-driven predictions. However, with the preliminary data, several qualitative interviews, and the team’s previous sales experience, the decision was made to lean into the manufacturing sector, and we started winning faster and far more frequently than we did in other markets. FatStax was acquired by the biggest competitor in the sales enablement space, BigTinCan, eight months after the shift toward these more traditional markets.
Not Enough Data Within the Data Set
The most common data issue that companies face is not enough data within the data set, meaning the data that was captured is incomplete. Marketing best practice holds that the more information you request or require, the less apt your prospects will be to comply or to supply correct information. As a result, I often find client databases riddled with incomplete records.
The first step is determining what data is required versus not required. For instance, at Duo Security we sold two-factor authentication, so the number of employees was a great indicator of how much we could earn by winning the business. Company employee count is a relatively easy number to obtain. At FatStax, however, we sold to companies with large sales teams, so our metric was not just sales team size but field sales team size, a non-trivial metric to search on. This second case meant that even after running a search on LinkedIn, SalesIntel, CloudLeads, Infotelligent, ZoomInfo, Slintel, or elsewhere, someone needed to triangulate and code the field sales team size into our CRM.
When a company requests information, only three bits are required: first name, last name, and company email. From there, it is straightforward to go to any of the providers above, or others, and enhance the data.
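As a first pass, even before calling a provider, the company domain can be derived from the email itself; it is typically the lookup key for enrichment. A minimal sketch (the record fields and values here are illustrative):

```python
def company_domain(email: str) -> str:
    """Extract the company domain from a work email address.

    The domain is usually what you feed a data provider to pull
    back the appended company-level fields.
    """
    local, _, domain = email.strip().lower().partition("@")
    if not local or not domain:
        raise ValueError(f"not a valid email: {email!r}")
    return domain

# A bare-bones captured record: the only three required fields.
lead = {"first_name": "Jane", "last_name": "Doe",
        "email": "Jane.Doe@Example.com"}
print(company_domain(lead["email"]))  # example.com
```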
The typical list of data a company should append to each company record, to aid in triangulating its ICP, is:
- Company Name
- Company URL
- Company location (City, State, Zip code & country)
- HQ Phone
- Industry (SIC or NAICS code)
- Employee count
- Annual revenue
Depending on how your company partitions its marketing or sales verticals, other fields may be required for data enhancement; at FatStax, for example, we needed field sales rep counts. A common enhancement is binning numerical values, such as employee counts or annual revenue, or grouping similar industries into bins. More on this below when discussing dealing with a large volume of data.
As for the individual, the typical list of appended data that a company should be collecting is:
- Name
- Phone number (Direct, Mobile)
- Title
- LinkedIn profile
- Location (City, State, Zip code & country) – as they may be remote
From the individual data, you can then enhance further by parsing the title into three segments: seniority, area, and role. Seniority is the person’s level in the company (e.g., Manager level, Director level, C- or V-level), area is their department (e.g., finance, marketing, sales), and role is their specialty (e.g., security, education, government). With a fully parsed title and a decent marketing or sales operations manager, your organization can then develop emails, landing pages, call scripts, and more, specifically focused on those enhanced and fully parsed titles, or as I call them, thinly sliced ICPs and personas. This “data append and enhance” protocol is what I did at SkySync.
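A minimal sketch of such a title parser; the keyword-to-segment mappings below are illustrative placeholders you would tune to your own market:

```python
import re

# Illustrative keyword maps; a real deployment would be far richer.
SENIORITY = {"chief": "C-level", "vp": "V-level",
             "director": "Director level", "manager": "Manager level"}
AREA = {"finance": "finance", "marketing": "marketing", "sales": "sales"}
ROLE = {"security": "security", "education": "education",
        "government": "government"}

def parse_title(title: str) -> dict:
    """Split a raw job title into seniority, area, and role segments."""
    tokens = set(re.findall(r"[a-z]+", title.lower()))

    def first_match(mapping, default=None):
        for word, label in mapping.items():
            if word in tokens:
                return label
        return default

    return {"seniority": first_match(SENIORITY, "Individual contributor"),
            "area": first_match(AREA),
            "role": first_match(ROLE)}

print(parse_title("Director of Sales, Government Accounts"))
# {'seniority': 'Director level', 'area': 'sales', 'role': 'government'}
```

With titles parsed this way, a nurture email or call script can be keyed off the (seniority, area, role) tuple rather than the raw, messy title string.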
SkySync Case Study
When I arrived at SkySync as director of inside sales, my first action was reviewing the data. What I noticed were holes. While we had many customers, our CRM didn’t even have billing information; that lived only in our accounting software. So I exported the data set and did an append and clean with ZoomInfo. Even the data we did have, I removed and replaced with data from ZoomInfo.
Why? Systematic error.
If you append and enhance data from multiple sources, you will end up with inconsistencies that are random in nature, and random error will wreak havoc on any data analysis you try to run. If you update data from a single source, you will instead end up with consistent inconsistencies, i.e., systematic error. When your analysis is off, it will be off by a constant amount across your entire data set, so in the end, it won’t matter as much.
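A toy illustration with made-up employee counts: a single-source (systematic) offset shifts every record by the same amount, preserving every comparison between companies, while multi-source (random) noise distorts each record unpredictably:

```python
import random

random.seed(42)
true_counts = [120, 450, 80, 900, 300]  # hypothetical employee counts

# Random error: each source is off by a different, unpredictable amount.
random_err = [n + random.randint(-50, 50) for n in true_counts]

# Systematic error: one source is off by a consistent amount (here +25).
systematic_err = [n + 25 for n in true_counts]

def mean(xs):
    return sum(xs) / len(xs)

print(mean(true_counts))     # 370.0
print(mean(systematic_err))  # 395.0 -- off by exactly 25, predictably
print(mean(random_err))      # off by an unpredictable amount
```

The systematically shifted data still ranks and segments companies identically; only the offset changes, which is why a single enrichment source is preferable.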
Two things came out of this “append and enhance” data protocol. First, we were able to develop a nurture campaign based on someone’s role and seniority. In this way, our marketing nurture emails were more relatable.
Second, we identified our SMB market, companies with fewer than five hundred employees, as the segment with the lowest profit margin. We then gave the BDR team a very prescriptive playbook for that segment. The result was that we kicked unqualified SMB prospects out of the funnel relatively quickly, closing fewer deals from that segment; however, those deals carried more revenue, increasing both our average sales price (ASP) and revenue for the segment.
Too Much Data
Larger companies with tons of data, or companies that ingest large volumes of data to fuel spray-and-pray outreach, wrestle with the inverse problem: too much data. You may be thinking, why is this a problem?
First, with too much data, there is bound to be a ton of variability. Typos aside, a look at the industry field will turn up many overlapping and duplicate values, and employee counts and annual revenue show the same spread. Dealing with this is similar to dealing with unstructured data: the data needs to be structured.
Structuring these larger datasets requires binning the data into supersets and possibly super-supersets. If you look up employee data on LinkedIn, under the company page, those data are binned like this:
- 1-10
- 11-20
- 21-50
- 51-200
- 201-500
- 501-1,000 …
Annual revenue is similarly binned.
Binning takes extensively variable data and turns it into smaller, easier-to-manage groups that can be drilled into later.
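A binning helper along these lines might look like the following sketch, using the LinkedIn-style breakpoints above:

```python
# Upper bound and label for each bin, mirroring the ranges listed above.
BINS = [(10, "1-10"), (20, "11-20"), (50, "21-50"),
        (200, "51-200"), (500, "201-500"), (1000, "501-1,000")]

def bin_employee_count(count: int) -> str:
    """Map a raw employee count onto its LinkedIn-style bin label."""
    for upper, label in BINS:
        if count <= upper:
            return label
    return "1,001+"

print(bin_employee_count(87))  # 51-200
```

Raw counts like 87, 102, and 150 all collapse into the single “51-200” group, which is what makes segment-level analysis tractable.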
With binned data at Duo Security, we were able to identify a fast-converting group that helped sales reps achieve their quotas.
Duo Security Case Study
As the first person in sales at Duo Security, I was overwhelmed with inbound leads. They came in from every direction, from companies of all shapes and sizes. However, it was a martini-glass funnel: lots of leads, lots of data, and lots of leads without any value or buying power. We looked at the industry data, and it was large, with lots of different industries, so trying to find one or two in particular was like finding a needle in a haystack. This is where we binned similar industries.
One bin contained all industries that fell under “professional services”. We noticed that this group showed a speedy conversion rate.
We then began to dissect this “professional services” bin to see whether any industries therein were tilting the scale. While they did not make up more than a percent or three of the total funnel, what quickly emerged was a tiny set of fast-moving leads from the law firm vertical.
Looking more closely at them, we found that in these law firms, IT managers were tasked with keeping documents secure for a user base that was not particularly technical. A law firm typically had one or two people in charge of IT. They would evaluate, implement, and, most importantly, pay for any IT products. They were not the big-ticket enterprise deals we wanted, but they were a good source of easily accessible recurring revenue. What’s more, they bought quickly; we didn’t have to wait for the full thirty-day free trial to get them in the door; typically it was half that.
The result was that as Duo Security grew and we added sales reps, any rep who was in danger of missing quota was told to review their pipeline and territory for law firms, since by that time we had significant data showing that law firms were relatively easy, fast-converting prospects.
The Next Steps
Nailing down your ICP and personas is one of the most important things your company can do to fuel growth. What’s more, with defined ICPs and personas, you can develop more focused sales and marketing strategies that carry a more significant impact. In the next article, we review tactics to parse datasets, both large and small, and arm you with ways to start doing your own data analysis to identify those key segments and fast-converting prospects.