Risk Management at Scale Part 5: change management and automation

{If you missed the first four chapters of Risk Management at Scale, read those first and then come back here}.

You did it! You’ve determined what your qualitative vs. quantitative data points are, developed ways to collect all the data, analyzed the resulting information, and built a framework that consistently provides you with cues to your customers’ risk. You’re all done, right? The answer is: almost. Creating the process, no matter how simple, is just the beginning. Now you have to automate as much of the intake as possible and make sure your team follows the full process consistently. That requires habit creation through change management.

Hopefully, you did yourself a big favor during the framework phase by automating as much of the data collection, analysis, and interpretation as possible. That reduces the weight of the change on your team. Any data your platform can produce and deliver directly into your CRM tool (like Gainsight or Salesforce) also ensures there’s no delay in flagging customers whose key performance metrics drop. Additional steps you can take to automate your data collection and interpretation are as follows:

If a computer can do it, make sure a computer is doing it

Yes, people are great at doing complex things, and they can “kill two birds with one stone” by interpreting data as they enter it into the system. BUT humans are also fairly unreliable, through no fault of their own. The best part about a computer is that it will do exactly what you tell it to, exactly when you tell it to. (The only exceptions are when the computer is given conflicting instructions, or when something changes what it’s being asked to do. Really.)

Break it down to speed it up

If you can break your data process into steps a computer can do and steps a human has to do, you’re better off. And if you feel like a human has to do all of it, you haven’t broken the steps down far enough. How is the data getting out of the platform you’re extracting it from? Is a person logging in and downloading it? Can you set up a job to deliver a file to a folder or inbox on a regular basis instead? Do that. How is the data getting into the platform you use to analyze it? Again, if a computer can run a job to ingest the data automatically, do it. Don’t know how? We guarantee it is worth your time and money to enlist someone from your engineering team, your data team, or even an outside specialist to set this up for you. The time saved and the reliability you buy are well worth the expense.
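To make “set up a job” concrete, here is a minimal sketch of what an automated extract might look like in Python. Everything specific in it is an assumption: the export URL, the environment variable names, and the shared folder path are placeholders for whatever your platform actually exposes.

```python
# fetch_usage_export.py - a minimal sketch of a scheduled extract job.
# The export URL, environment variable names, and folder path are all
# placeholders for whatever your platform actually provides.
import datetime
import os

import requests

EXPORT_URL = os.environ.get("PLATFORM_EXPORT_URL", "https://example.com/api/usage.csv")
API_TOKEN = os.environ.get("PLATFORM_API_TOKEN", "")
DROP_FOLDER = os.environ.get("DROP_FOLDER", "/shared/customer-data")

def fetch_daily_export() -> str:
    """Download today's usage export and save it to the shared folder."""
    response = requests.get(
        EXPORT_URL,
        headers={"Authorization": f"Bearer {API_TOKEN}"},
        timeout=60,
    )
    response.raise_for_status()  # fail loudly so a broken feed gets noticed

    path = os.path.join(DROP_FOLDER, f"usage_{datetime.date.today().isoformat()}.csv")
    with open(path, "wb") as f:
        f.write(response.content)
    return path

if __name__ == "__main__":
    print(f"Saved export to {fetch_daily_export()}")
```

Scheduled with cron (for example, `0 6 * * * python fetch_usage_export.py` to run at 6am daily), the “person logging in and downloading it” step disappears entirely.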

Use the tools you have, but use them better

Only using Excel and don’t have a CRM? Using Zendesk and Salesforce, but they don’t talk to each other? Have Gainsight for scorecards, but haven’t set up automated scores? Don’t worry - use what you have, but take the time to automate and optimize it. Build a macro in Excel to pull in information from a folder on your shared drive. Use the API connections your software already exposes (Zendesk and Salesforce both have them) to automate updating data back and forth, as sketched below. Build a field that Gainsight can leverage to track your scores automatically. Don’t know how to do these things? Google is your friend. It’s very likely that all the software you’re using has help documentation somewhere on the web. Carve out a few hours, roll up your search sleeves, and follow the step-by-step instructions on how to accomplish what you need. Trust us, it’s worth your time now to save time (and ensure accuracy) later.
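As one example of what “use the API connections” can look like, here is a hedged sketch that counts open Zendesk tickets per customer and writes the count to a custom Salesforce field using the third-party simple_salesforce library. The subdomain, the Open_Tickets__c field, and the account mapping are all assumptions; check your own instances’ API docs before copying anything.

```python
# A sketch of syncing a Zendesk support metric into Salesforce. All names
# below (subdomain, custom field, account ids) are placeholders.
import os

import requests
from simple_salesforce import Salesforce  # third-party: pip install simple-salesforce

ZENDESK_SUBDOMAIN = "yourcompany"  # placeholder
ZENDESK_AUTH = (os.environ["ZD_EMAIL"] + "/token", os.environ["ZD_API_TOKEN"])

def count_open_tickets(org_id: int) -> int:
    """Count unsolved Zendesk tickets for one customer organization."""
    resp = requests.get(
        f"https://{ZENDESK_SUBDOMAIN}.zendesk.com/api/v2/search.json",
        params={"query": f"type:ticket status<solved organization:{org_id}"},
        auth=ZENDESK_AUTH,
        timeout=30,
    )
    resp.raise_for_status()
    return resp.json()["count"]

sf = Salesforce(
    username=os.environ["SF_USER"],
    password=os.environ["SF_PASS"],
    security_token=os.environ["SF_TOKEN"],
)

# Mapping of Salesforce Account ids to Zendesk organization ids, maintained by you.
ACCOUNT_MAP = {"001xx000003DGbXAAW": 360000001}  # placeholder ids

for sf_id, zd_org in ACCOUNT_MAP.items():
    sf.Account.update(sf_id, {"Open_Tickets__c": count_open_tickets(zd_org)})
```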

Reduce the amount of work your team has to do

Simplify. Building complex models and multiple data sets is great, but can you get the same or similar results with less? To be clear: the automated parts can be complex, as long as they are valuable and truly automated. When it comes to the components your team has to enter, interpret, or act upon, simplicity is best. Take a moment to look at the aspects of your process that people will have to update or interact with - are all of these points necessary? Are you sure? If you can simplify, even just for now, you’ll see greater success during change management.

Great!  You’ve automated all the tasks that can be automated.  You’ve connected your systems to talk to each other and you’ve set up jobs for your product to run, ensuring data is delivered in a consistent, timely manner.  Now take a look at all of the people-run parts of the process. Document the steps; be clear and concise. Use screenshots. Draw boxes and arrows to indicate exactly what you want done.  Now put this documentation in a single, easy-to-find place and use it to train the team on the process.

Training the team is just the first step in the change management process. Telling your team how to do something and expecting them to do it is not enough. They will continue to do things the way they always have unless you move them to the new way. This means consistency, clarity, and (nearly) constant communication from you.

People hate change and the way things are, in that order.

That means no matter how much better the process you’ve devised is, it is far easier for humans to follow their old habits.  Habits are our lazy brains’ way of being efficient. Once a habit loop is formed, the only way to break it is to form a new habit on the existing trigger.  

So how do you get a team of people to adopt your new process? You follow up with them. A lot.

Here are the steps we take to change-manage teams (see our future post on Change Management for details):

  1. Acknowledge that change is hard

  2. Explain why the change needs to be done and what the intention is

  3. Acknowledge that the process may not be perfect

  4. Request that the whole team use only the new process and retire the old tracking methods

  5. Ask for feedback, then act on it

  6. Create visibility and give kudos for successful change

  7. Provide dedicated time for team members to walk through the process again

  8. Bring up the new process and request feedback during team meetings

  9. If a team member is struggling to adopt the new process, sit with them on a regular basis to help them get on board

  10. Continue this for at least three months, or until it is clear that all of your team members are comfortably following the new process

The worst thing you can do is roll out a new process only to find out that no one is following it. Especially when it comes to risk management and health scores, you MUST have consistent and reliable data on your customers. Getting your systems in order to automate as much of the process as possible is a necessary step in the journey to customer retention and risk management. Ensuring that your team has formed excellent habits around maintaining health scores and following risk playbooks is the pivotal final step toward success with your customers.

As mentioned above, once a process is a habit, it will be nearly effortless to continue. This leaves your customer team with more time to be strategic and thoughtful with your customers, leading not only to better, happier relationships, but to stronger odds of renewal and expansion.

If you’ve found this blog series helpful, consider attending one of our Risk Management Seminars in San Francisco, CA. More information is available on our website or at meetup.com.

For more information about Risk Management, or to request a customized Risk Management Workshop for your team, send us a note at contact@sandpoint.io.

To get updates when we publish additional blog posts, be sure to follow Sandpoint Consulting on LinkedIn.

Risk Management at Scale Part 4: building a framework

{If you missed the first three parts of Risk Management at Scale, read those first and then come back here}.

To briefly recap: we’ve determined what our qualitative vs. quantitative data points are, we’ve developed ways to collect all the data (through human input and telemetry data from your platform), and we’ve taken a stab at analyzing the resulting information. Now for the fun part: building a framework to consistently interpret your data, provide you with cues to your customers’ risk, and create playbooks for your team (and/or software) to execute. In our final chapter, we’ll dig deep into automation, change management and scalability.

Before jumping in, we have a question for you: during your data analysis, did you review the data associated with current customers as well as customers who have churned? The latter analysis will uncover some of the most valuable insights you have. Don’t overlook the nuggets of information, the trends, and even the absence of data for customers who are no longer around. And if you haven’t designed a Post-Mortem process for your team yet, be sure to tune in for our future blog post on this very valuable data collection activity.

During this chapter, we’ll provide you with a sample structure we use to build a weighted risk management framework around up to four quantitative data points and up to three qualitative data points. Remember: good, automated, consistently updated data is the only way to ensure that your risk management framework remains consistent and proactive. If your data is hand-entered (especially quantitative data), doesn’t flow in at a consistent time (preferably daily), or relies too much on human interpretation (is entirely qualitative), your ability to manage real risk across your customers will be too slow and too inaccurate. Strive for the ability to “set it and forget it” when dealing with quantitative data, and aim for consistent habits (see Part 5) when dealing with qualitative data.

Let’s start with qualitative data. This is the data collected from your team members about how they think the customer is doing, based on their interactions (or lack thereof) with the customer. What does your current qualitative health score look like? The complexity of this score often depends on the background of the team member who implemented it. Sometimes it’s a single data point: Customer Health. Other times there are multiple facets: Risk, Engagement, Value. We’ve even seen a customer who had broken customer health down into five different health indicators across three different levels of customer persona (buyer, champion, day-to-day). Needless to say, this health score, though comprehensive, was difficult to complete and even more harrowing to maintain.

We have also come across elaborate scoring systems, such as 0-100 or other numeric scales (0-6 was used in one system, outlined below). The problem with these systems is the gap between what the number means to the person entering it (your customer team member) and what it means to the person interpreting it.

  • Is this customer an 88, or are they a 72?  
  • What does it mean when you say one customer is a 91 and another customer is a 92?  Are they the same? Is there a real reason for the single point difference?  
  • If I have entered a customer as 33 because they are at extreme risk, but my colleague also entered their customer at 33 because they are just worried about their customer and only rate extreme risk as 10 or below, how do we prioritize these customers?  

That much granularity allows for too much confusion. Similarly, if you scale it back to 0-6, how do you define these numbers?

  • 0 = no relationship
  • 1 = bad
  • 2 = stressed
  • 3 = fair
  • 4 = good
  • 5 = happy
  • 6 = excellent

Which leads to further questions:

  • What’s the difference between “stressed” and “bad” or “stressed” and “fair”?  
  • What qualifies a customer as “happy” versus “excellent” or “good”?  

A health score with too much gradient or too much ambiguity in its definitions means your data quality will be poor and the variation between team members’ interpretations when scoring will be high.

Generally, we recommend keeping the score simple and easy to understand, like Red / Yellow / Green.  

  • Red means stop (high risk)
  • Yellow means caution (needs improvement)
  • Green means go (great health)

If you feel that more granularity is necessary, add a letter grade: A, B, C, D, F, as used in most US grade-school scoring systems.

  • A = Excellent (everything is perfect)
  • B = Good (room for improvement, but generally very good)
  • C = Average / Warning (should improve, customer is not achieving full potential)
  • D = Risk (unhappy, low value)
  • F = Failure / Will Churn (red alert!).  

For easy visual interpretation, we recommend associating these letters with colors (A&B = green // C = yellow // D&F = red), since this makes the score easier to read at a glance and simplifies the concept back into RYG.  Notice, we didn’t assign five colors, only three.
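If you store the letter grade in a tool, it helps to keep the letter-to-color collapse in exactly one place so it never drifts between reports. A tiny, purely illustrative Python sketch:

```python
# A tiny illustrative sketch of keeping the letter-to-color collapse
# in one place.
GRADE_TO_COLOR = {"A": "green", "B": "green", "C": "yellow", "D": "red", "F": "red"}

def color_for(grade: str) -> str:
    """Map a letter grade to its RYG color; raises KeyError on anything else."""
    return GRADE_TO_COLOR[grade.strip().upper()]

assert color_for("b") == "green"   # A & B collapse to green
assert color_for("D") == "red"     # D & F collapse to red
```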

We also recommend using three points of data for qualitative health: risk, value and engagement.  This way you can allow your team members a little bit of nuance with their health score.  If a customer is getting great value from your product but they constantly cancel calls with your customer team members, they may get a “green / A” score for value and a “red / D” score for engagement.  We also recommend providing the team with a health score matrix for qualitative health (see below) to define clearly how they should use each color / letter score for each data point.

[Image: qualitative health score matrix (Risk Management 4a)]

Now let’s add in a quantitative health score. This score can be one of the most valuable indicators of how successfully the customer is leveraging your product. These metrics should be automatically captured by your product and fed into a spreadsheet or CRM tool (like Gainsight or Salesforce) on a daily or weekly basis. Capturing this data less frequently will not allow your team to react quickly to negative scores.

When building your score (start with just one final score), you will want to leverage multiple data points from your customer data analysis. For this section, you’ll need to know the following: your top 3 or 4 data points that “move the needle”, the threshold above or below which a customer is “go/no go” on each of them, and which ONE data point is your primary metric and which ONE is your secondary (the other two are tertiary).

There should be one primary indicator of customer health and happiness.  Usually, this primary indicator is something BIG - tied directly into your value proposition for your product.  Usage, ROI, transactions completed, time to value, etc. are often the biggest indicators of health.  This primary metric will be the make-or-break score.  If a customer does not have this metric “in the green”, then they cannot achieve a score higher than yellow / C.  

There should also be one secondary indicator of customer health.  This secondary indicator is usually the one that moves the needle, but not quite as much as your primary.  This secondary metric must be green to allow a score of green / A.  

The one or two tertiary indicators are additional metrics that provide insight into the customer’s overall performance with the product. Perhaps this is a metric tied to the customer’s time on site, user adoption, impressions, or another lower-value signal. These help provide additional context when a customer is doing poorly on one of the more important health indicators.

Now it’s time to build your weighted framework.  The goal will be to have one quantitative health score that indicates to your customer team whether a customer’s performance is sufficient for success.  This will be an 11-point framework.  

  • Primary metric = 6 points
  • Secondary metric = 3 points
  • Tertiary metric #1 = 1 point
  • Tertiary metric #2 = 1 point

Total = 11 points

The key to this weighted framework is the health threshold: the number or percentage above or below which your customer gets a “go” or “no go” on a given metric. Let’s assume your primary metric is ROI. In order for a customer to get a “go” (aka all 6 points), they have to be receiving 3x ROI from the platform. Otherwise, they get 0 points for their primary metric.

  • A = 11 or 10 points
  • B = 9 or 8 points
  • C = 7, 6, or 5 points
  • D = 4 or 3 points
  • F = 2, 1, or 0 points

Green = 8+ points || Yellow = 7, 6, or 5 points || Red = 4-0 points
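Put together, the scoring logic is only a few lines. In the sketch below, the 6/3/1/1 weights, the go/no-go rule, the 3x ROI primary threshold, and the grade bands all come from the tables above; the other metric names and threshold values are illustrative assumptions.

```python
# A sketch of the 11-point framework above. The 6/3/1/1 weights, the
# go/no-go rule, the 3x ROI primary threshold, and the grade bands come
# from the tables in this post; the other metric names and threshold
# values are illustrative assumptions.
WEIGHTS = {"primary": 6, "secondary": 3, "tertiary_1": 1, "tertiary_2": 1}
THRESHOLDS = {
    "primary": 3.0,     # ROI multiple; "go" at 3x, per the example above
    "secondary": 0.80,  # hypothetical: fraction of a usage target
    "tertiary_1": 100,  # hypothetical: monthly active users
    "tertiary_2": 0.25, # hypothetical: feature adoption rate
}

def quantitative_score(metrics: dict) -> tuple:
    """Return (points, grade, color). A metric earns its full weight on a
    "go" (at or above its threshold) and zero points on a "no go"."""
    points = sum(WEIGHTS[m] for m, v in metrics.items() if v >= THRESHOLDS[m])
    if points >= 10:
        grade = "A"
    elif points >= 8:
        grade = "B"
    elif points >= 5:
        grade = "C"
    elif points >= 3:
        grade = "D"
    else:
        grade = "F"
    color = "green" if points >= 8 else "yellow" if points >= 5 else "red"
    return points, grade, color

# A customer at 3.2x ROI with every other metric at "go" scores 11 / A / green:
print(quantitative_score(
    {"primary": 3.2, "secondary": 0.85, "tertiary_1": 150, "tertiary_2": 0.30}
))
```

Notice that the weighting encodes the earlier rules automatically: missing the primary caps a customer at 5 points (yellow / C), and missing the secondary caps them at 8 points (a B, never an A).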

Now you have your quantitative health score - a single score based on up to 4 metrics from your customer’s direct interaction with your product - and you have your qualitative health score - three scores based on your customer’s direct interaction with your team.  These should be viewed independently to ensure context is provided, but you can also create a roll-up score that gives your customer a health score based on all of these results combined.  There are three ways to accomplish this, but only two that we recommend.

You can build a simple, unweighted roll-up score, where the four scores each contribute 25% of the overall score. This means that if your CSM has marked the customer as Red / F for risk (meaning they know the customer will not renew), but the other three scores are green, the customer will not appear to be at significant risk.

--- Example 1 (Bad) ---

  • Risk = F
  • Value = A
  • Engagement = A
  • Quantitative / Product = A → Overall = B

This can blind you and your team to what is really going on with your customers.  We do not recommend unweighted scores.

You can create an “all-or-none” score, which is a great way to see risk: no matter how many greens, yellows or reds a customer has, their overall score is equal to the lowest score on their card.  So the same example customer would have a Red / F overall score, even if 3/4 of their card is green.  

--- Example 2 ---

  • Risk = F
  • Value = A
  • Engagement = A
  • Quantitative / Product = A → Overall = F

--- Example 3 ---

  • Risk = A
  • Value = B
  • Engagement = A
  • Quantitative / Product = A → Overall = B

This method ensures you will always see risk.  This method also means that there is no gradient - it is all or none.  If your customer team has some kind of exception in their data or they are working toward a goal of improving overall value, this method can create a LOT of red scores on your scorecard.

The final method is to weight your scores, perhaps giving more weight to the Risk and Quantitative / Product scores than to Value and Engagement. This method can be good for ensuring that Risk is surfaced, but creating an accurate weighting system is difficult, and risk can still slip through the cracks. If a customer is disengaged for a long period of time, for example, your team member may overlook it if the overall score seems good.

Sample Weighting

  • Risk = 35%
  • Value = 15%
  • Engagement = 15%
  • Quantitative / Product = 35%

--- Example 4 ---

  • Risk = A
  • Value = A
  • Engagement = D
  • Quantitative / Product = A → Overall = A
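For concreteness, here is a sketch of the two roll-up methods we do recommend. The 35/15/15/35 weights mirror the sample above; the grade points (A=4 through F=0) and the banding of the weighted average back into letters are our own assumptions.

```python
# A sketch of the two recommended roll-up methods. The 35/15/15/35 weights
# mirror the sample above; the grade points and letter banding are assumptions.
GRADE_POINTS = {"A": 4, "B": 3, "C": 2, "D": 1, "F": 0}
WEIGHTS = {"risk": 0.35, "value": 0.15, "engagement": 0.15, "product": 0.35}

def all_or_none(card: dict) -> str:
    """Overall = lowest grade on the card, so a single red always surfaces."""
    return min(card.values(), key=lambda g: GRADE_POINTS[g])

def weighted(card: dict) -> str:
    """Overall = weighted average of grade points, banded back to a letter."""
    avg = sum(GRADE_POINTS[g] * WEIGHTS[k] for k, g in card.items())
    for grade, floor in (("A", 3.5), ("B", 2.5), ("C", 1.5), ("D", 0.5)):
        if avg >= floor:
            return grade
    return "F"

print(all_or_none({"risk": "F", "value": "A", "engagement": "A", "product": "A"}))  # F (Example 2)
print(weighted({"risk": "A", "value": "A", "engagement": "D", "product": "A"}))     # A (Example 4)
```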

The final step in building your framework is to design a few easy-to-follow playbooks for your team to run when a customer sees a low score. Tracking your scores is only half the battle; the real value comes from having a plan for improvement and acting on it! These playbooks can include simple tasks, such as “re-engage the sales team” if engagement drops, or more complex ones, such as “audit customer value” if value drops. Designing playbooks for your team to execute when a score drops or stays below a certain threshold ensures that you stay focused on what matters and avoid over- or under-working a problem that has arisen. It also empowers your team to do what you need them to do: mitigate risk.

...

Now that you’ve built your framework and a few playbooks for action, we’ll learn how to automate and scale this process for your team AND steps for change management to ensure the right habits are built to maintain good quality health scores (part 5).  To get updates when we publish the additional parts of this series, be sure to follow Sandpoint Consulting on LinkedIn.

For more information about Risk Management, or to request a customized Risk Management Workshop for your team, send us a note at contact@sandpoint.io.

Risk Management at Scale Part 3: understanding what moves the needle

{If you missed the first two parts of Risk Management at Scale, read those first and then come back here}.

If you’ve made it this far, kudos on your drive towards reducing risk. Setting up quantitative and qualitative measurements (Part 1), and the process of collecting data (Part 2) are the prerequisites for the real work: analyzing the data. Well, there’s one last step between collecting and analyzing… and that’s physically getting the data.

Many early-stage companies have a limited suite of tools for data export, collection and analysis. You may also receive data in a basic format, like a spreadsheet, SQL export, or a CSV from your internal database. An engineer or data scientist may be able to give you a massively sliced export of the entire database, but it’s often beneficial to keep things simple. Use the software tools you’re comfortable with for this first pass, and keep in mind what additional data you would want next time.

Okay, data is exported and you’re ready to go. But wait, are you the right person to start digging into the analysis? Do you know how to read between the lines? Do you have a clear performance indicator to keep in mind while you are researching? It’s okay if you don’t know right now, but it’s not worth your time if you start exploring without a destination in mind. Take a moment to understand your larger business problems to find those KPIs. Imagine you found a treasure map but there’s no big X indicating where it’s buried. Not much help, is it?

Talk with your team and figure out the problems you’re trying to solve. You can ask everyone what they think is important and use that as a guide… OR go in uninfluenced and see for yourself what is important. Both options have their pros/cons so proceed with what you think makes sense. (Note: If you choose to go in with a hypothesis, a great place to pre-investigate would be your post-mortems. See the post-script for more information).

Let’s use a basic scenario: you have a CSV, you’re using Excel, and you have moderate skills with filtering / pivoting / vlookups. As a personal preference, we recommend keeping your raw data untouched and pristine in its own worksheet, then copy-pasting everything to a new tab to do your filtering. This ensures you can always return to the full set when you want to take a different approach.
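If you’d rather script the exploration, the same “pristine raw tab” habit translates directly to pandas. The file and column names below are placeholders:

```python
# A minimal pandas version of the same habit. The file and column names
# ("plan_tier", "churned", "roi_multiple", "monthly_widgets") are placeholders.
import pandas as pd

raw = pd.read_csv("customer_export.csv")  # the untouched "raw data" worksheet
work = raw.copy()                         # the tab where you sort/filter/pivot

# Example exploration: median ROI by plan tier, split by churn status.
work = work[work["monthly_widgets"] > 0]  # document every filter you apply
summary = work.pivot_table(
    index="plan_tier",
    columns="churned",
    values="roi_multiple",
    aggfunc="median",
)
print(summary)
```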

Sort, filter, sort, filter. As you start to poke around, you’ll end up either finding a) the needle in the haystack → outliers, or b) a haystack → trends. Trends are the 80% you should focus on, tackling the bulk of your customers first. For the most part, a single outlier doesn’t tell you anything useful for creating new frameworks to prevent risk. It would be hard to build a system to prevent churn for the one customer that decided to pivot from selling iced tea to adopting blockchain technology (true story).

Look for the haystack, or rather the multiple small haystacks that can be indicators. (If we use Google Search as an example, the big haystack is ad revenue, and the small haystacks are time on site, active users, searches per day, etc). And make sure to document your filters / sorting, so you know what your dataset contains. If someone else with the same data set did the same analysis with the same filters, they should arrive at the same conclusion.

Not all of your discovered trends will be groundbreaking, but document them anyway. Each trend can be an individual clue to unlock a bigger theme within the data. And while you may find a few outliers that match up, don’t get distracted trying to build a case of outliers. Remember: focus on the 80%.

How do you know what’s important? A good starting point is to assume that everything is important! The discovery of “our customers like it when we make them money” is obvious, so dig deeper. Do they want a 5x ROI, where they pay you $1 a month and want to make at least $5? Is it more or less? Are there qualitative deliverables that don’t have direct revenue attached to them but can keep a 1.1x customer around for years? Finally, strive to find trends and causes, not coincidences. It’s critical to understand whether a data point actually drives retention and customer happiness, or whether the impact is created by another factor.

Customer example: After a thorough analysis, we drew a line on what an acceptable ROI multiplier was for each client tier. Then we noticed a second data point: uneven contract vs. service levels. Customers would sign contracts for a certain number of widgets per month, and even if they were able to exceed their monthly revenue goals with 10% fewer widgets, they still felt ‘cheated’ out of their full order. This feeling was mitigated going forward by having CS assess and clearly document contract obligations and promises. It also became a helpful data point for cross-examining customer risk. From this, we were able to build the right framework (see Part 4!).

Finally, get a second and third set of eyes on your findings. For person #2, it’s best to pick someone who knows the customers and the space, and who is preferably close to your team. They are there to help you see the forest, as you’ve spent so much time in the trees. For person #3, go to another department. Their viewpoint is helpful because they approach it with a product, marketing, or sales mindset. They may also have knowledge you weren’t privy to (e.g., an email blast went out on a certain day and caused a huge lift in traffic and server costs) that can help color your findings in a new light.

Analysis is complete. Now, we’ll learn how to leverage your data to prevent risk (Part 4) and do it consistently at scale (Part 5). To get updates when we publish the additional parts of this series, be sure to follow Sandpoint Consulting on LinkedIn.

For more information about Risk Management, or to request a customized Risk Management Workshop for your team, send us a note at contact@sandpoint.io.

...

POST-SCRIPT

post·mor·tem  / pōs(t)-ˈmȯr-təm / noun: a process, usually performed at the conclusion of a project, to determine and analyze elements of the project that were successful or unsuccessful

If your relationship with a customer has concluded, chances are it was unsuccessful, since they decided to stop using your product. Your team should be collecting post-mortems for every churned customer. More information can be found in this blog post.

Risk Management at Scale Part 2: collecting the data

{If you missed Risk Management at Scale Part 1, read it first and then come back here}.

“I wonder how many people live here.” - US founders, 1789 → US Census
“Do you know how fast you were going?” - Highway police, present day → Radar gun
“Are customers using the product the way we thought they would?” - You → ???

For better or worse, humans are obsessed with measurements and metrics. They help us compare our progress over time against our peers and our competitors.

To start, let’s define an important term that should help turn those question marks above on product use into actions:

te·lem·e·try / tə-ˈle-mə-trē / noun: the science and technology of automatically measuring data at remote or inaccessible points and transmitting it to receiving equipment for monitoring.

Your team may already be collecting this telemetry: user analytics from Mixpanel to measure traffic, Zendesk reporting to track tickets created per customer, and even Google News alerts for important customer events like acquisitions or board member changes. All of this is great information for your customer success team to collect and have top of mind for their next call. This is quantitative data.

But speaking of the call, how do you measure your relationship? This is qualitative. Below are examples of signals to listen for that can help a CSM take their customer’s temperature.

  • Are they talking about events far into the future (i.e. beyond their upcoming renewal date)?
  • Has your day-to-day mentioned that their manager is leaving and a new person is taking over the department?
  • Have they cancelled the last couple calls with no explanation?

While there is no specific KPI connected to these remarks, they tell you something about the customer’s specific experience interacting with the product and the CS team. Typically, we recommend customers use either school (A - F) or traffic light (green - yellow - red) grading to evaluate the relationship. Updating the score, even if it’s refreshing the same grade because the relationship is still great, helps ensure that everyone in the organization knows the score is current and accurate.

Where you store the relationship score really depends on your suite of tools and budget. There are customer relationship management tools, like Salesforce and Gainsight, that have this functionality built-in. Changes in score can also trigger specific playbooks based on a positive or negative movement. In addition, we’ve seen companies simply use a shared spreadsheet with only four columns: customer, assigned CSM, score, and date updated. It’s worth noting that this seemingly simple spreadsheet, when organized correctly, can become the foundational documentation for when you upgrade to a CRM tool, as it provides a clear template for your implementation.

Working in tandem with CSM scoring, your product and engineering teams should have proprietary systems and metrics built into the software you sell that help you understand customer behavior. When did the customer last log in? Is the day-to-day contact only using one feature? Is the customer VP clicking on the ‘Review Plans’ page, comparing their basic package to the premium version?
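As a sketch of how those questions become quantitative data, here is one way to derive “days since last login” and “features used” from a raw event log with pandas. The schema (customer_id, feature, timestamp columns) and the 14-day cutoff are assumptions; your product’s events will look different.

```python
# A sketch of deriving the signals above from a raw event log. The schema
# (customer_id, feature, timestamp) and the 14-day cutoff are assumptions.
import pandas as pd

events = pd.read_csv("product_events.csv", parse_dates=["timestamp"])

per_customer = events.groupby("customer_id").agg(
    last_login=("timestamp", "max"),       # "When did they last log in?"
    features_used=("feature", "nunique"),  # "Are they only using one feature?"
)
per_customer["days_since_login"] = (
    pd.Timestamp.now() - per_customer["last_login"]
).dt.days

# Flag customers who look disengaged on either dimension.
at_risk = per_customer[
    (per_customer["days_since_login"] > 14) | (per_customer["features_used"] <= 1)
]
print(at_risk)
```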

Collecting isn’t the hard part (hopefully). Once you have all this data, ask yourself if it’s accurate. And if it’s not accurate, why? It benefits absolutely no one if you find reasons to trim the numbers: “Oh, ignore these customers because they are too big / small / odd.” That practice may produce the pretty up-and-to-the-right chart to show your board, but you won’t learn anything and won’t know where to focus your efforts.

It’s also important to build a culture of transparency within your customer team. If your team members feel incentivized to fib about their customers’ happiness (giving green when it should be yellow or red), you’re up a creek with no paddle. It is much better to know there’s an issue and have a plan of action than to be falsely confident and blindsided by bad news. A customer should rarely, or never, go directly from green to churn.

Let the story unfold from the full data set, and accept it as the current picture. Ignorance is not bliss. It’s better to know that the numbers are not great, rather than have “good” heavily edited data… and then be surprised by a spike in churn.

….

So, you’re collecting the data. Next we’ll get our hands dirty figuring out what all this data tells us about today, and how that changes tomorrow (in Part 3). From here, we’ll learn how to leverage your data to prevent risk (Part 4) and do it consistently at scale (Part 5).  To get updates when we publish the additional parts of this series, be sure to follow Sandpoint Consulting on LinkedIn.

For more information about Risk Management, or to request a customized Risk Management Workshop for your team, send us a note at contact@sandpoint.io.

Risk Management at Scale Part 1: quantitative vs. qualitative

No matter what you are selling to another person or company, whether it's a service, software or a cup of coffee, understanding risk factors and designing easy, scalable ways to stay ahead of churn are paramount to your company's success.  In order to begin this process, you must start thinking about your customers from their perspective and find ways to determine their "health" as it pertains to their relationship with you.

Most of our clients have some semblance of a qualitative health score on their customers when we begin working with them.  This usually comes in the form of either a traffic light (green - yellow - red) or letter grade (A-F) score given by a Customer Success Manager (CSM) to a customer based on the perceived relationship.  Sometimes this information is stored in a Customer Relationship Management (CRM) tool (like Gainsight or Salesforce), an internal customer wiki (in Confluence or Asana) or even in a spreadsheet (Excel or Google Sheets).  This data point may be updated regularly (daily or weekly), but more often is only updated monthly, quarterly or even "periodically", which translates to: when I remember to do it or if I'm told I have to...

We'll talk more about the structure of health scores in Part 4: building a framework, and we'll discuss update frequency in Part 5: change management and automation.  In this post, we'll describe the two major ways to evaluate customer health: quantitative and qualitative.  

Let's start with the easy one, or at least the one that seems easy on the surface: qualitative health.

qual·i·ta·tive  /ˈkwäləˌtādiv/ adjective: relating to, measuring, or measured by the quality of something rather than its quantity.  "a qualitative change in the undergraduate curriculum"

To put it simply, this score "bucket" relates to the quality of the customer's experience with your product or service.  Are they happy with it? Are they delighted by the experience?  How do they feel? 

Every interaction with your customer can give you a clue about their happiness with your company, your product or service and your team.  Accurately capturing this information after each interaction is vital to ensure that you are mitigating risk as it arises.  The tricky part is not capturing this information, per se, but doing so accurately.

The place that a lot of companies and team members slip up is through the assumption that "no news is good news".  Regarding customer health, this is ABSOLUTELY NOT the case.  No news is usually terrible, horrible, very bad news. 

In general, the less engaged your customer is with your company, the higher the risk of them leaving.  Why?  Because it's really easy to "switch vendors", but it's really hard to fire a person.  When your customer is engaged with your company through your customer team, they are interacting with people.  This interaction creates relationships and, done well, makes your customer very hesitant to "fire" you.  Without that relationship, you're just another tool that can be replaced with a newer, cooler, better, more interesting tool.

We'll dig deeper into this subject in Part 2: collecting the data.

Now let's get into the real meat and potatoes of customer health: quantitative scoring.

quan·ti·ta·tive /ˈkwän(t)əˌtādiv/ adjective: relating to, measuring, or measured by the quantity of something rather than its quality. "quantitative analysis"

This is where a lot of our customers freeze up.  How do you quantify your customer's health?  Isn't health something soft and "squishy", like relationships and happiness?  The answer is... well, sorta. As discussed above, your customer's happiness is a key part of their overall health, but there are tangible, measurable aspects of their interaction with your company and your product that directly impact their happiness.  How much value are they seeing out of your product or service?  How often do they use your product or engage with your service?  How much impact is your product or service creating in their business?

The great news is: all of these items are quantifiable, collectible and potentially meaningful indicators of the health of your customers.  Throughout this series, we'll describe not only how to collect the data and analyze it (Part 2 & Part 3), but how to leverage it to prevent risk (Part 4) and do it consistently at scale (Part 5).  To get updates when we publish the additional parts of this series, be sure to follow Sandpoint Consulting on LinkedIn.

For more information about Risk Management, or to request a customized Risk Management Workshop for your team, send us a note at contact@sandpoint.io.