Partnership Announcement

SP_Plus_GS.png

A match made in Success.

Today we are excited to announce our strategic partnership with Gainsight, The Customer Success Company, one of the most effective and recognizable Customer Success platforms in the world. Gainsight provides consistent, high-quality Customer Success solutions to over 500 companies worldwide.

This partnership strengthens our ties with the Customer Success community and provides yet another outlet for collaboration with Sandpoint Consulting. In addition, Sandpoint Consulting will begin offering targeted assistance to companies that wish to adopt Gainsight into their Customer Success toolkit.

Interested in hearing more about how Sandpoint and Gainsight can help your company thrive? Send us a note at contact@sandpoint.io to schedule a discovery call.

*Correction: this version is updated to state a corrected number of Gainsight customers

Why CS Should NOT Report to a Sales Executive

CS Not Report Sales.JPG

Sales and Customer Success have a very important, yet complex relationship that they must nurture and maintain. Their alignment on company initiatives, customer acquisition, customer happiness and renewal is paramount to your company’s success. In today’s blog post, we’re going to discuss the reasons why the healthiest companies have autonomous Customer Success and Sales teams, functioning in a mutually beneficial, but independent fashion.

Less than a decade ago, many companies had a sales/revenue team that was responsible for all incoming, expanding and renewing monies on behalf of the company, and a customer support team (often viewed as a cost center) that was responsible for addressing customer issues as they arose and/or providing services as ordered by the customer. Occasionally, the most strategic accounts had a dedicated team responsible to that specific client, but the majority of roles remained split into the two types described above: sales and service/support.

In today’s growing enterprise technology universe, it is imperative to have a third member of the team: your Customer Success Manager. This team member is responsible for the customer’s success first. That means that your Customer Success team (a revenue center) knows the contracts that the customer has signed, the expectations of the customer, and the business outcomes they are trying to reach. From this, the CSM forms a strategically-aligned, collaborative relationship with the customer, free from the pressure of making a sale. This team member manages the workings of your company on behalf of the customer and communicates the customer’s needs, wants and positive feedback back through the company.

The value that this provides to your customer is twofold: one, they have a “neutral” party with whom to discuss and plan their business goals, requirements and strategic pathways; two, they have a partner whose primary goal is to work on their behalf to ensure their success and reduce their time-to-value with your product.  By reducing time-to-value and keeping a pulse on the business goals and outcomes, the Customer Success team ensures that your customer’s renewal is a non-event and can even prime them for conversations about expansion.

A customer with a strategically-aligned and engaged Customer Success Manager will more quickly see value in your product, will feel like their needs and wants are heard (even if they cannot be implemented), and will form a deeper relationship with a key member of your team who can directly help them achieve their goals. This customer will be more likely to promote your product to their network and will offer more marketing potential through case studies and testimonials.

The big question is: why shouldn’t CS report to a Sales executive?

The simplest answer is that they don’t have the same micro goals.  Yes, their macro goal (increase and maintain revenue at your company as much as possible) is the same, but below that things begin to diverge. There should always be a healthy amount of friction between the two organizations in order to ensure that you are retaining as much revenue as possible, not just pouring as much in the top of the funnel as you can.

Your sales leadership should be focused on increasing the revenue coming in the door, while reducing the cost to acquire a customer (CAC). Your CS leadership should be focused on increasing the net revenue from existing customers, maintaining the total number of customers, and reducing the cost to expand and renew a customer (CEC, CRC). And yes, in case you were curious, we are firmly in the camp of “CS owns renewal numbers”. It is our opinion (and we are not alone) that in order to build a customer team that is revenue generating, you must put the onus (and the incentives) on them to renew. We tend to shy away from having CSMs directly responsible for presenting the renewal contract to the customer and opt instead for a renewal manager who works on the CS team and reports to CS leadership. We’ll talk more about CS incentives and team structures in a separate blog post.

Another factor to consider is expertise. Your VP of Sales or CRO (Chief Revenue Officer) may have experience leading customer teams, but they are head of sales for a very clear reason: that is where their expertise lies.  Hiring an equivalent head of CS means that you benefit from their knowledge.  They are also one of the only executives whose primary perspective is that of your customers.  They will consistently bring the voice of the customer into the room, ensuring that your company makes high-level strategic decisions based on direct feedback from those companies or people who are currently paying for your product.  

As Dan Steinman, former Chief Customer Officer of Gainsight, states, “Alignment with Sales is intuitively easier if Customer Success reports to the VP of Sales or the CRO but some of the inherent challenges may be easier to overcome in a peer-level organization structure.” (source) In order for your company to grow in a healthy way, there should always be pressure to do the best thing for both the company and the customer.  When that pressure gets equal representation in a meeting of executives, your company looks more closely at decisions and prioritizes well for growth over time.  

To use an example from Dan’s article (source above), let’s say that you have a prospect that requires a new feature to be built in order for them to sign.  At the same time, you have multiple current customers demanding better performance and threatening to walk if they don’t get it.  As much as your sales leader will try to weigh both concerns equally, she may be inclined (or unevenly incentivized) to close the deal, essentially robbing Peter (current customer) to pay Paul (prospect).  With a CS leader in the room, the conversation happens in the open with both sides arguing for what they need, stating their case as peers.  The company makes better, more informed decisions and it is clear to all stakeholders why the decision was made.  

Possibly more important than any other point, having a CS leader in the room with your other executives helps morale on the customer team.  Having good morale is absolutely imperative, as these team members interact every minute of every day with your valuable assets - the customers.  Customers can tell when morale is low, or when a CSM is feeling discouraged.  Customers get nervous if your team seems beaten down, negative or pessimistic.  Having an equal voice in senior leadership meetings means that the CS team feels heard and they have a trusted leader who can provide them with context for why the company decided to prioritize the new feature over better performance.  Your CS leader can ensure that your CS team feels like they are an equal part of the company and not like they are the under-appreciated janitor with no voice.

Be sure to follow Sandpoint Consulting on LinkedIn to be alerted for future posts. For more information on anything customer success, send us a note at contact@sandpoint.io.

Post-Mortems: Learn From Your Past

Post Mortems.JPG

In case this is a new term for you, let’s define it right away:

post·mor·tem / pōs(t)-ˈmȯr-təm / noun: a process, usually performed at the conclusion of a project, to determine and analyze elements of the project that were successful or unsuccessful

At its most simple, this is a documented discussion that you should undertake when a customer has ended their contract. For the most part, if you are in enterprise SaaS, a contract ending is an unsuccessful ending, as the customer no longer wishes to use your product.

You want to start this documentation as soon as you can so that data and events are fresh in your head. It may seem pessimistic to start writing it as soon as a client gives notice to cancel, but preparation is never a bad thing. Start writing down recent events, make note of their usage numbers, review their contract. We’ll expand into what to document in a moment.

The goal is to understand how this happened. How did the customer decide to end their contract? And from this, there are two follow-up paths:

  1. How can we retain this customer?
  2. How can we retain the NEXT customer?

If you have the good fortune and enough time to try and keep this customer, do everything you can. However, if you’ve exhausted all avenues, take this lump as a lesson on how to make sure no other customers churn for the same reason. Don’t make the same mistake again.

Okay, so let’s get into it. The owner of this document and process should be the assigned CSM, as they have the most context on the post-sale relationship. They can start the document on their own, followed by a meeting with all relevant internal parties. Use the below as a starting template, and make sure to customize it based on your business.

I. Background

Company

  • Industry
  • Company size (ARR and/or employee count)
  • Contacts and roles

Contract

  • Account Executive
  • What got the customer to sign / Any expectations set pre-sale
  • Start Date
  • Renewal Date
  • Contract levels

Implementation

  • Implementation Engineer
  • Launch Date
  • Time to Launch
  • Roadblocks encountered

Success

  • Customer Success Manager
  • Any other members brought on to service account
  • Metrics and numbers
    • Total / YoY / QoQ / MoM / Any other way to slice

II. Engagement

  • Summary: ~100 words on how the customer was sold, onboarded, and managed, what difficulties arose, and the path to ending
  • Early Signs: Insert all events that may have been a cause for concern
  • Actions Taken: Insert all events that were undertaken to try to course-correct
  • Last Straw: What was the single event that caused them to finally reach out and cancel or decide not to renew

III. Learnings

  • What did we miss early on
  • What were the root causes
  • What could we have done better
  • What do we look for NOW with our current customers that may lead us to early detection
  • What new systems can we adopt / upgrade

The CSM should fill out the first two sections, and then schedule a 30-minute meeting with the account executive, head of CS, a support team lead, and potentially a product manager, to discuss. It’s important that this meeting does not become a finger-pointing session where everyone tries to blame someone else for the lost customer. The goal is to discover learnings that prevent future customers from churning, so the name of the game is change management, a topic I know is near and dear to Emily’s heart. Expect a blog post on that soon!

Be sure to follow Sandpoint Consulting on LinkedIn to be alerted for future posts. For more information on anything customer success, send us a note at contact@sandpoint.io.

Risk Management at Scale Part 5: change management and automation

Risk Management 5.JPG

{If you missed the first four chapters of Risk Management at Scale, read those first and then come back here}.

You did it!  You’ve determined what your qualitative vs. quantitative data points are, developed ways to collect all the data, analyzed the resulting information, and built a framework to consistently provide you with cues to your customer’s risk. You’re all done, right?  The answer is: almost. Creating the process, no matter how simple, is just the beginning. Now you have to automate as much of the intake as possible and make sure your team follows the full process consistently. This requires habit creation through change management.

Hopefully, you’ve done yourself a big favor by automating as much of the data collection, analysis and interpretation during the framework phase.  This will help reduce the weight of change on your team. Any of the data your platform can produce and deliver into your CRM tool (like Gainsight or Salesforce) will also ensure that there’s no delay when key performance metrics drop for your customers.  Additional steps you can take to automate your data collection and interpretation are as follows:

If a computer can do it, make sure a computer is doing it

Yes, people are great at doing things that are complex and can “kill two birds with one stone” by interpreting data as it is entered into the system.  BUT humans are also fairly unreliable, through no fault of their own. The best part about a computer is that it will do exactly what you tell it to exactly when you tell it to.  (The only time this is not the case is when the computer is being told more than one thing and the things it’s being told to do conflict, or something changed what the computer is being asked to do.  Really.)

Break it down to speed it up

If you can break your data process into steps that a computer can do and steps that a human has to do, you’re better off.  And if you feel like a human has to do all of it, you haven’t broken the steps down far enough.  How is the data getting out of the platform you’re extracting it from? Is a person logging in and downloading it?  Can you set up a job to deliver a file to a folder or inbox on a regular basis instead? Do that. How is the data getting into the platform you use to analyze it?  Again, if you can run a job with a computer to ingest the data automatically, do it. Don’t know how? I guarantee it is worth your time, money and collaborative efforts to enlist someone from your engineering team, data team or even an outside specialist to set this up for you.  The time saved and the reliability you buy are well worth the expense.
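As a sketch of what “a job a computer can do” looks like, here is a minimal Python example that picks up the most recent CSV export from a shared folder. The folder path, file naming, and columns are assumptions for illustration; in practice a scheduler (cron, Task Scheduler, etc.) would run something like this daily:

```python
import csv
from pathlib import Path

# Hypothetical drop folder where a nightly export job delivers files.
EXPORT_DIR = Path("shared_drive/exports")

def load_latest_export(export_dir):
    """Parse the most recently named CSV export into a list of row dicts."""
    export_dir = Path(export_dir)
    if not export_dir.is_dir():
        return []  # nothing delivered yet; a real job might alert here
    files = sorted(export_dir.glob("*.csv"))  # date-stamped names sort last
    if not files:
        return []
    with files[-1].open(newline="") as f:
        return list(csv.DictReader(f))

rows = load_latest_export(EXPORT_DIR)
```

Once the rows are landing automatically, the human step shrinks to interpreting them rather than fetching, opening, and re-keying the data.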

Use the tools you have, but use them better

Only using Excel and don’t have a CRM?  Using Zendesk and Salesforce, but they don’t talk to each other?  Have Gainsight for scorecards, but haven’t set up automated scores?  Don’t worry - use what you have, but take the time to automate and optimize it.  Build a macro in Excel to pull in information from a folder on your shared drive.  Use the API connections that your software has (like Zendesk and Salesforce) to automate the process of updating data back and forth. Build a field that Gainsight can leverage to track your scores automatically. Don’t know how to do these things? Google is your friend.  It’s very likely that all the software you’re using has some kind of help documentation on the world wide web. Carve out a few hours, roll up your search sleeves, and follow the step-by-step instructions on how to accomplish what you need to do. Trust me, it’s worth your time now to save time (and ensure accuracy) later.

Reduce the amount of work your team has to do

Simplify.  Building complex models and multiple data sets is great, but can you get the same or similar results with less?  To be clear - the automated stuff can be complex as long as it is valuable and truly automated. When it comes to the components that your team has to enter, interpret or act upon, simplicity is best.  Take a moment to look at the aspects of your process that people are going to have to update or interact with - are all of these points necessary? Are you sure? If you can simplify, even just for now, you’ll see greater success during change management.

Great!  You’ve automated all the tasks that can be automated.  You’ve connected your systems to talk to each other and you’ve set up jobs for your product to run, ensuring data is delivered in a consistent, timely manner.  Now take a look at all of the people-run parts of the process. Document the steps; be clear and concise. Use screenshots. Draw boxes and arrows to indicate exactly what you want done.  Now put this documentation in a single, easy-to-find place and use it to train the team on the process.

Training the team is just the first step in the change management process.  But telling your team how to do something and expecting them to do it is not enough.  They will continue to do things the way they have been unless you move them to do it the new way.  This means consistency, clarity, and (nearly) constant communication from you.  

People hate change and the way things are, in that order.

That means no matter how much better the process you’ve devised is, it is far easier for humans to follow their old habits.  Habits are our lazy brains’ way of being efficient. Once a habit loop is formed, the only way to break it is to form a new habit on the existing trigger.  

So how do you change a team of people to adopt your new process?  You follow up with them. A lot.

Here are the steps we take to change manage teams (see our future post on Change Management for details):

  1. Acknowledge that change is hard

  2. Explain why the change needs to be done and what the intention is

  3. Acknowledge that the process may not be perfect

  4. Request that the whole team try to only use the new process and not use old methods for tracking

  5. Ask for feedback, then act on it

  6. Create visibility and give kudos for successful change

  7. Provide dedicated time for team members to walk through the process again

  8. Bring up the new process and request feedback during team meetings

  9. If a team member is struggling to adopt the new process, sit with them on a regular basis to help them get on board

  10. Continue this for at least three months, or until it is clear that all of your team members are comfortably following the new process

The worst thing you can do is roll out a new process only to find out that no one is following it. Especially when it comes to risk management and health scores, you MUST have consistent and reliable data on your customers. Getting your systems in order to automate as much of the process as possible is a necessary step in the journey to customer retention and risk management. Ensuring that your team has formed excellent habits around maintaining health scores and following risk playbooks is the pivotal final step toward success with your customers.

As mentioned above, once a process is a habit, it will be nearly effortless to continue.  This leaves your customer team with more time to be strategic and thoughtful with your customers, leading not only to better, happier relationships, but stronger guarantees of renewal and expansion.

If you’ve found this blog series helpful, consider attending one of our Risk Management Seminars in San Francisco, CA. More information is available on our website or at meetup.com.

For more information about Risk Management, or to request a customized Risk Management Workshop for your team, send us a note at contact@sandpoint.io.

To get updates when we publish additional blog posts, be sure to follow Sandpoint Consulting on LinkedIn.

 

Risk Management at Scale Part 4: building a framework

Risk Management 4.JPG

{If you missed the first three parts of Risk Management at Scale, read those first and then come back here}.

To briefly recap, we’ve determined what our qualitative vs. quantitative data points are, we’ve developed ways to collect all the data (through human input and telemetry data from your platform), and we’ve taken a stab at analyzing the resulting information.  Now for the fun part: building a framework to consistently interpret your data, provide you with cues to your customer’s risk and create playbooks for your team (and/or software) to execute.  In our final chapter, we’ll dig deep into automation, change management and scalability.

Before jumping in - we have a question for you:  during your data analysis, did you review the data associated with current customers as well as customers who have churned?  The latter analysis will uncover some of the most valuable insights you have.  Don’t overlook the nuggets of information, trends and even absence of data for customers who are no longer around.  And if you haven’t designed a Post-Mortem process for your team yet, be sure to tune in for our future blog post regarding this very valuable data collection activity.

During this chapter, we’ll provide you with a sample structure that we use to build a weighted risk management framework around up to four quantitative data points and a maximum of three qualitative data points. Remember, good, automated and consistently updated data is the only way to ensure that your risk management framework remains consistent and proactive. If your data is hand-entered (especially quantitative data), doesn’t flow in at a consistent time (preferably daily) or relies too much on human interpretation (is entirely qualitative), your ability to manage real risk for your customers will be too slow and too inaccurate. Strive for the ability to “set it and forget it” when dealing with quantitative data and aim for consistent habits (see Part 5) when dealing with qualitative data.

Let’s start with qualitative data. This is the data collected from your team members about how they think the customer is doing based on their interaction (or lack thereof) with the customer. What does your current qualitative health score look like? The complexity of this score often depends on the background of the team member who implemented it. Sometimes it’s a single data point: Customer Health. Other times there are multiple facets: Risk, Engagement, Value. We’ve even seen a customer who had broken customer health down into five different health indicators across three different levels of customer persona (buyer, champion, day-to-day). Needless to say, this health score, though comprehensive, was difficult to complete and even more harrowing to maintain.

We have also come across elaborate scoring systems such as 0-100 or other numeric scales (0-6 was used in one system, outlined below).  The problem with these systems has to do with how they are understood by the person entering the number (your customer team member) and how that differs from the understanding of the person interpreting the number.

  • Is this customer an 88, or are they a 72?  
  • What does it mean when you say one customer is a 91 and another customer is a 92?  Are they the same? Is there a real reason for the single point difference?  
  • If I have entered a customer as 33 because they are at extreme risk, but my colleague also entered their customer at 33 because they are just worried about their customer and only rate extreme risk as 10 or below, how do we prioritize these customers?  

This much granularity allows for too much confusion. Similarly, if you scale it back to 0-6, how do you define these numbers?  

  • 0 = no relationship
  • 1 = bad
  • 2 = stressed
  • 3 = fair
  • 4 = good
  • 5 = happy
  • 6 = excellent

Which leads to further questions:

  • What’s the difference between “stressed” and “bad” or “stressed” and “fair”?  
  • What qualifies a customer as “happy” versus “excellent” or “good”?  

Health scores with too much gradient or too much ambiguity in their definitions mean that your quality of data will be poor and the variation between team members’ interpretations when scoring will be high.

Generally, we recommend keeping the score simple and easy to understand, like Red / Yellow / Green.  

  • Red means stop (high risk)
  • Yellow means caution (needs improvement)
  • Green means go (great health)

If you feel that more granularity is necessary, add a letter grade: A, B, C, D, F as used in most US grade school scoring systems.  

  • A = Excellent (everything is perfect)
  • B = Good (room for improvement, but generally very good)
  • C = Average / Warning (should improve, customer is not achieving full potential)
  • D = Risk (unhappy, low value)
  • F = Failure / Will Churn (red alert!).  

For easy visual interpretation, we recommend associating these letters with colors (A&B = green // C = yellow // D&F = red), since this makes the score easier to read at a glance and simplifies the concept back into RYG.  Notice, we didn’t assign five colors, only three.
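The letter-to-color collapse above is just a lookup table. As a quick sketch (the names here are ours):

```python
# Collapse the five letter grades back into the three RYG colors,
# per the mapping above: A & B -> green, C -> yellow, D & F -> red.
GRADE_TO_COLOR = {"A": "green", "B": "green", "C": "yellow", "D": "red", "F": "red"}

def color_of(grade):
    """Return the RYG color for a letter grade (case-insensitive)."""
    return GRADE_TO_COLOR[grade.upper()]
```

Keeping the mapping in one place means every dashboard and report collapses grades to colors the same way.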

We also recommend using three points of data for qualitative health: risk, value and engagement.  This way you can allow your team members a little bit of nuance with their health score.  If a customer is getting great value from your product but they constantly cancel calls with your customer team members, they may get a “green / A” score for value and a “red / D” score for engagement.  We also recommend providing the team with a health score matrix for qualitative health (see below) to define clearly how they should use each color / letter score for each data point.

Risk Management 4a.JPG

Now let’s add in a quantitative health score. This score can be one of the most valuable indicators for your team regarding the overall success of the customer leveraging your product. These metrics should be automatically captured by your product and fed into a spreadsheet or CRM tool (like Gainsight or Salesforce) on a daily or weekly basis. Capturing data like this less frequently will not allow your team to react quickly to negative scores.

When building your score (start with just one final score), you will want to leverage multiple data points from your customer data analysis. For this section, you’ll need to know the following: your top 3 or 4 data points that “move the needle”; the threshold above or below which the customer is “go/no go” on each; which ONE data point is your primary metric; and which ONE is your secondary metric (the other two are tertiary).

There should be one primary indicator of customer health and happiness.  Usually, this primary indicator is something BIG - tied directly into your value proposition for your product.  Usage, ROI, transactions completed, time to value, etc. are often the biggest indicators of health.  This primary metric will be the make-or-break score.  If a customer does not have this metric “in the green”, then they cannot achieve a score higher than yellow / C.  

There should also be one secondary indicator of customer health.  This secondary indicator is usually the one that moves the needle, but not quite as much as your primary.  This secondary metric must be green to allow a score of green / A.  

The one or two tertiary indicators are additional metrics that provide insight into the customer’s overall performance with the product.  Perhaps this is a metric tied to the customer’s time on site, user adoption, impressions or other, lower-value metric.  These help provide additional context if a customer is doing poorly on one of the more important health indicators.

Now it’s time to build your weighted framework.  The goal will be to have one quantitative health score that indicates to your customer team whether a customer’s performance is sufficient for success.  This will be an 11-point framework.  

  • Primary metric = 6 points
  • Secondary metric = 3 points
  • Tertiary metric #1 = 1 point
  • Tertiary metric #2 = 1 point

Total = 11 points

The key to this weighted framework is the health threshold:  what is the number or percentage above or below which your customer gets a “go/no go” on this metric.  Let’s assume your primary metric is ROI.  In order for a customer to get a “go” (aka: all 6 points), they have to be receiving 3x ROI from the platform.  Otherwise, they get 0 points for their primary metric.

  • A = 11 or 10 points
  • B = 9 or 8 points
  • C = 7, 6, or 5 points
  • D = 4 or 3 points
  • F = 2, 1, or 0 points

Green = 8+ points || Yellow = 7, 6, or 5 points || Red = 4-0 points
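To make the arithmetic concrete, here is a sketch of the 11-point score in Python. The metric names, and every threshold other than the 3x ROI example above, are illustrative assumptions of ours, not prescriptions:

```python
# Weights per the framework above: primary = 6, secondary = 3, tertiaries = 1 each.
WEIGHTS = {"roi": 6, "usage": 3, "time_on_site": 1, "adoption": 1}

# "Go" at or above these values; 3.0 (3x ROI) comes from the example above,
# the rest are placeholders you would replace with your own analysis.
THRESHOLDS = {"roi": 3.0, "usage": 0.6, "time_on_site": 10, "adoption": 0.5}

def quantitative_score(metrics):
    """All-or-nothing points per metric: full weight if at/above threshold, else 0."""
    return sum(w for name, w in WEIGHTS.items()
               if metrics.get(name, 0) >= THRESHOLDS[name])

def grade(points):
    """Map the 0-11 point total onto the letter bands above."""
    if points >= 10:
        return "A"
    if points >= 8:
        return "B"
    if points >= 5:
        return "C"
    if points >= 3:
        return "D"
    return "F"
```

For example, a customer at 3.4x ROI with healthy usage but weak tertiary metrics scores 6 + 3 + 0 + 0 = 9 points, a B. Note how the weighting enforces the rule above: a customer missing the primary threshold can score at most 3 + 1 + 1 = 5 points, capping them at yellow / C.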

Now you have your quantitative health score - a single score based on up to 4 metrics from your customer’s direct interaction with your product - and you have your qualitative health score - three scores based on your customer’s direct interaction with your team.  These should be viewed independently to ensure context is provided, but you can also create a roll-up score that gives your customer a health score based on all of these results combined.  There are three ways to accomplish this, but only two that we recommend.

You can build a simple, unweighted roll-up score in which the four scores each contribute 25% of the overall score. This means that if your CSM has marked the customer as Red / F for risk (meaning they know the customer will not renew), but the other three scores are green, you will not see the customer as being at significant risk.

--- Example 1 (Bad) ---

  • Risk = F
  • Value = A
  • Engagement = A
  • Quantitative / Product = A → Overall = B

This can blind you and your team to what is really going on with your customers.  We do not recommend unweighted scores.

You can create an “all-or-none” score, which is a great way to see risk: no matter how many greens, yellows or reds a customer has, their overall score is equal to the lowest score on their card.  So the same example customer would have a Red / F overall score, even if 3/4 of their card is green.  

--- Example 2 ---

  • Risk = F
  • Value = A
  • Engagement = A
  • Quantitative / Product = A → Overall = F

--- Example 3 ---

  • Risk = A
  • Value = B
  • Engagement = A
  • Quantitative / Product = A → Overall = B

This method ensures you will always see risk.  This method also means that there is no gradient - it is all or none.  If your customer team has some kind of exception in their data or they are working toward a goal of improving overall value, this method can create a LOT of red scores on your scorecard.

The final method is to weight your scores, perhaps providing more weight to Risk and Quantitative / Product scores than to Value and Engagement. This method can be good for ensuring that Risk is surfaced, but creating an accurate weighting system can be difficult, and data can fall through the cracks. If a customer is disengaged for a long period of time, for example, your team member may overlook this if the overall score seems good.

Sample Weighting

  • Risk = 35%
  • Value = 15%
  • Engagement = 15%
  • Quantitative / Product = 35%

--- Example 4 ---

  • Risk = A
  • Value = A
  • Engagement = D
  • Quantitative / Product = A → Overall = A
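The three roll-up methods can be sketched side by side. This assumes a GPA-style mapping of letters to numbers (A=4 through F=0) with rounding back to the nearest grade; the key names are ours:

```python
GPA = {"A": 4, "B": 3, "C": 2, "D": 1, "F": 0}
LETTER = {v: k for k, v in GPA.items()}

def unweighted(card):
    """Simple average: every score contributes equally (not recommended)."""
    return LETTER[round(sum(GPA[g] for g in card.values()) / len(card))]

def all_or_none(card):
    """Overall equals the worst score on the card, so risk always surfaces."""
    return LETTER[min(GPA[g] for g in card.values())]

def weighted(card, weights):
    """Weighted blend, e.g. Risk and Product at 35%, Value and Engagement at 15%."""
    return LETTER[round(sum(GPA[card[k]] * w for k, w in weights.items()))]
```

Run against the example cards above: the Example 1 customer (F, A, A, A) averages out to a B unweighted but an F under all-or-none, and the Example 4 customer (A, A, D, A) comes out as an A under the sample weighting, masking the poor Engagement score.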

The final step for building your framework is to design a few easy-to-follow playbooks for your team to run if a customer sees a low score. Tracking your scores is only half the battle; the real value comes from having a plan for improvement and acting on it! These playbooks can include simple tasks such as “re-engage the sales team” if engagement drops, or more complex tasks, such as “audit customer value” if value drops. Designing playbooks or plans for your team to execute if a score drops or stays below a certain threshold ensures that you will be able to stay focused on what matters and avoid over- or under-working a problem that has arisen. It also empowers your team to do what you need them to do: mitigate risk.

...

Now that you’ve built your framework and a few playbooks for action, we’ll learn how to automate and scale this process for your team AND steps for change management to ensure the right habits are built to maintain good quality health scores (part 5).  To get updates when we publish the additional parts of this series, be sure to follow Sandpoint Consulting on LinkedIn.

For more information about Risk Management, or to request a customized Risk Management Workshop for your team, send us a note at contact@sandpoint.io.