Century Interactive Combines Call Tracking with ‘Humanatics’

I spoke recently with Century Interactive (CI), which bills itself as a marketing analytics company rather than a “call tracking” company. CI provides call-based analytics for marketers across media categories but also helps enterprises and larger SMBs with operational efficiencies.

CI doesn’t do PPCall (pay per call).

I spoke with CI’s COO Patrick Elverum last week. He said there were two primary things that differentiated CI from other “commodity” providers of call tracking in the market. The first was what he called “Humanatics.”

Marchex and Yext’s Felix (now part of CityGrid) use automated speech-to-text transcription and algorithms to evaluate the quality of calls. Elverum disputed whether that approach works at all, given the generally poor quality of automated call transcription (see Google Voice). Instead, CI crowdsources call-quality evaluation. (Marchex also uses IVR to capture caller intent as a quality-control mechanism.)

Elverum said that call recordings are listened to by trained contractors, who provide a single, subjective evaluation of each call: good lead or not, conversion or not. Each client or industry vertical may have a different conversion event or bit of “magic language” the call evaluator listens for.

Once a call is scored it’s often checked by someone else and verified. In this way CI can tell which calls “worked” or delivered a real prospect or customer. Here’s Elverum’s more precise description of the process:

The system is built with several checks and balances, one of which is sending calls that look suspicious (short review time, characteristics of x but tagged as y, etc.) to a secondary reviewer. We also take advantage of multiple staged questions to allow reviewers to correct a call that might have been tagged incorrectly by a previous reviewer, as well as dedicated auditors. Not all calls are sent on to a secondary reviewer, but all reviewers and categories are continuously monitored for accuracy across multiple fronts.
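CI hasn’t published how this pipeline is implemented, so purely as an illustration of the “checks and balances” idea, here is a minimal sketch of how a suspicious-review trigger might work. The field names, thresholds and rules are my own assumptions, not CI’s actual system.

```python
from dataclasses import dataclass

@dataclass
class CallReview:
    call_id: str
    duration_secs: int   # length of the call recording
    review_secs: float   # how long the reviewer spent on it
    tag: str             # e.g. "good_lead", "not_a_lead"

# Illustrative threshold -- not CI's actual value.
MIN_REVIEW_RATIO = 0.25  # review time suspiciously short vs. call length

def needs_secondary_review(review: CallReview) -> bool:
    """Flag reviews whose characteristics don't match their tag
    ("characteristics of x but tagged as y") or that look rushed."""
    if review.review_secs < review.duration_secs * MIN_REVIEW_RATIO:
        return True  # short review time relative to the recording
    if review.tag == "good_lead" and review.duration_secs < 20:
        return True  # too short to plausibly contain a real conversation
    return False

# A 180-second call skimmed in 10 seconds goes to a secondary reviewer.
suspect = CallReview("c-001", duration_secs=180, review_secs=10.0, tag="good_lead")
print(needs_secondary_review(suspect))  # True
```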

Once call-conversion scoring is accomplished, that information can be fed into Google’s Conversion Optimizer or into platforms such as Kenshoo, which offers “call conversion optimization” for search campaigns. Asked about the lag time from scoring to submission to Kenshoo or Google, Elverum told me that CI’s “stated goal is to maintain lag times under three hours. We are generally under one hour.” Pretty good.
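CI hasn’t described the hand-off mechanics either, so again only as a sketch: one way to stay inside that lag budget is a recurring job that flushes calls scored within the last hour to the downstream platform. The record fields and CSV layout below are hypothetical, not Kenshoo’s or Google’s actual import format.

```python
import csv
from datetime import datetime, timedelta, timezone

# Hypothetical scored-call records; in practice these would come out of
# the review pipeline sketched above.
scored_calls = [
    {"call_id": "c-001", "tracking_number": "+18005550101",
     "converted": True, "scored_at": datetime.now(timezone.utc)},
]

def export_scored_calls(calls, path, max_age=timedelta(hours=1)):
    """Write calls scored within `max_age` to a CSV for a bid platform.
    Running this on a short cycle keeps scoring-to-submission lag small."""
    cutoff = datetime.now(timezone.utc) - max_age
    with open(path, "w", newline="") as f:
        writer = csv.writer(f)
        writer.writerow(["call_id", "tracking_number", "converted", "scored_at"])
        for call in calls:
            if call["scored_at"] >= cutoff:
                writer.writerow([call["call_id"], call["tracking_number"],
                                 call["converted"], call["scored_at"].isoformat()])

export_scored_calls(scored_calls, "call_conversions.csv")
```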

As I described this system to an industry insider, he asked “How scalable is it?” Elverum told me that CI currently has “a wait list that is twice the size of our current review population.”

Elverum provided a vertical case study to illustrate how, beyond marketing, insights gained from call analytics can help a business on the operations and customer service side.

CI analyzed just under 2 million calls to 342 auto dealers, as well as roughly 6.1 million outbound calls. What CI discovered is the following:

  • 39% of inbound consumer sales calls failed to reach the designated or requested dealer employee. The biggest problem: callers who asked for a specific employee weren’t routed back into the phone system when that employee was unavailable
  • 24% of connected inbound calls were inventory or lead calls
  • On inventory calls, the dealer employee asked the consumer to come in 22% of the time. Consumer-buyers accepted the invitation for a specific date 36% of the time and declined 12% of the time
  • When a dealer appointment request was made, the consumer-caller “softly” accepted, without committing to a specific day, 52% of the time
  • For these “soft” appointment acceptances, the dealership made an outbound confirmation call only 17% of the time
  • However, 86% of outbound dealer calls never reached the intended consumer
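To make those percentages concrete, here’s a rough back-of-the-envelope funnel that chains them together. It assumes each figure applies to the volume from the preceding stage, which the study summary doesn’t strictly confirm, so treat the absolute numbers as illustrative.

```python
# Back-of-the-envelope funnel from the dealer study figures.
inbound = 2_000_000                   # "just under 2 million" inbound calls
connected = inbound * (1 - 0.39)      # 39% never reached the requested employee
inventory = connected * 0.24          # inventory or lead calls
invited = inventory * 0.22            # employee asked the caller to come in
soft_accepts = invited * 0.52         # accepted without a specific day
confirm_calls = soft_accepts * 0.17   # dealership called back to confirm
reached = confirm_calls * (1 - 0.86)  # 86% of outbound calls never connect

print(f"soft appointment accepts: {soft_accepts:,.0f}")        # ~33,500
print(f"confirmations that reached the buyer: {reached:,.0f}")  # ~800
```

On these assumptions, roughly 33,500 soft appointment acceptances collapse to only about 800 confirmed contacts, which is exactly the kind of operational leak the analysis is meant to expose.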

What the above case study indicates is how call analytics and insights can help businesses better understand their customer interactions and where service may be weak or sales are potentially being lost.

In terms of call evaluation and lead scoring for marketing purposes, CI is the only call-tracking firm I’m aware of using humans to evaluate calls. In the industry there’s considerable skepticism around this approach as a “scalable” solution, though Elverum argues that it is.

Elverum and I also spoke about how call tracking helps with “attribution” and figuring out which media channels are “working.” But we discussed scenarios in which call tracking can also obscure the influence of certain media.

For example, conventional media ads (TV, newspaper, radio, print magazines) may generate awareness that later leads to research and buying inquiries. At the awareness stage of the “funnel,” consumers may not pick up the phone, but they may be stimulated to do further research that ultimately leads to a purchase. Using call tracking to measure the direct response of all media in such a situation might, ironically, produce an inaccurate sense of attribution: a version of the online “last click” problem.
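As a toy illustration of that distortion (the channels and journeys here are invented for the example), last-touch credit makes the upper-funnel medium disappear entirely:

```python
# Toy comparison of last-touch vs. even multi-touch credit.
# Each journey lists the media a buyer touched before calling.
journeys = [
    ["tv", "search"],   # TV spot built awareness; a search ad drove the call
    ["tv", "search"],
    ["search"],
]

def credit(journeys, last_touch=True):
    scores = {}
    for path in journeys:
        touches = [path[-1]] if last_touch else path
        for channel in touches:
            scores[channel] = scores.get(channel, 0) + 1 / len(touches)
    return scores

print(credit(journeys, last_touch=True))   # {'search': 3.0} -- TV gets nothing
print(credit(journeys, last_touch=False))  # {'tv': 1.0, 'search': 2.0}
```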

For SMBs, however, which may only be concerned with direct response/performance and care less about “branding” or “awareness,” call tracking can be a highly valuable tool, notwithstanding some of the SEO objections to its use. A former Marchex employee told me recently that they never experienced a problem with “data (NAP) pollution.”

For mobile advertising, call tracking is an absolute must. (See my discussion of the xAd-LaQuinta case study.)

I’m curious what you think about this use of crowd sourcing to evaluate call quality. Do you like the idea or do you think it’s flawed?


6 Responses to “Century Interactive Combines Call Tracking with ‘Humanatics’”

  1. Mike Wilson says:

    Greg –

    I think left by itself, the model is potentially unscalable. In order to scale solely through applied Humanatics, there is a question of the skill of the analyzer, with a possible premium on reducing skill to build a population of analyzers at a lower cost. However, a combination of automation/analytics tools AND Humanatics does start to become scalable. Algorithms can benefit from the applied learning/findings of human analyzers and thus become stronger, more reliable tracking tools, which then can become better at supporting the Humanatics efforts as well. Essentially this becomes cyclically beneficial to both automated and human-based analyzers. The key is honing this balance as you scale. It does ring a little of Mechanical Turk applied to call tracking… But this is a model worth watching for the points you raise above.

  2. Greg Sterling says:

    As far as I know CI is the only firm trying this approach. 

  3. Patrick Elverum says:

    Mike – 

    Great comments. The Turk was definitely one of the creative inspirations for Humanatic. The name Humanatic actually points to the machine-learning aspect that you suggest here. One way to think of Humanatic is as a very high-level mechanical filter combined with a very focused human element, the purpose of which is to identify the calls that matter most. The more data fed to Humanatic, the better the machine gets at predicting good calls based on the numerous call characteristics that we measure. That predictive algorithm allows us to be more efficient with our human resourcing, and scaling becomes not necessarily an obstacle but an opportunity to improve our predictions.

  4. Steve Minton says:

    I’ve been a client of Century Interactive for about a year now, and I have been using the Humanatic service to help me sort through calls to my pay-per-call clients. The guys at Century Interactive have been great. They have been consistently tweaking their system to improve the accuracy and reporting.

    Since a PPCall business model relies on identifying good (chargeable) calls, it is critical to have a high degree of accuracy so that your client base does not lose trust. My PPCall clients are small service businesses (handymen, tree guys, etc.). Their call volumes have to be low because most of them just can’t handle more than 100 calls a month (some fewer than 50). With such a small volume, any false judgements on call quality are quickly identified. If too many mistakes are made, the trust in the relationship quickly deteriorates. Accuracy is key.

    The other aspect to consider is that the rules governing what constitutes a “good call” are not the same for Fred the tree guy as they are for Joe the Handyman. The tree guy may not want to get involved with stump grinding, but he may still get calls for that service because a percentage of the general public will assume that all tree guys are happy to grind stumps (even though it is not listed as a service). The same kind of assumptions can arise about service area as well. Joe the Handyman will have his service area clearly identified, but he will often get calls outside of his area.

    These two types of scenarios require critical thinking to identify the calls properly. Unfortunately, identifying the call correctly is not always black and white. Getting detailed instructions to the crowdsourced workers is the key, but, at the same time, the instructions need to be simple enough that they can be quickly and easily understood.

    The scalability of the crowdsourcing lies in providing the correct instructions to the listeners. Once the accuracy is high enough (over 95%), the system can go on autopilot. Thanks again to the guys at Century Interactive. They’ve been there for me all along and I know that they are working hard to crack this nut.

  5. Carlton says:

    CI is not the only company to “crowd source” call evaluation. ContactPoint and Call Source have been human-scoring calls for a decade. The tradeoff with this approach is scalability and consistency. Humans are subjective no matter how you slice it. Also, the skepticism about whether machines can do this kind of work is ill-founded. Speech analytics is ready for prime time and will replace this “Humanatic” approach within the next year, so stay tuned.

  6. Patrick Elverum says:

    Carlton – 

    I appreciate the counterpoints, and I will tell you that CI is always looking for the engineered solution first. I won’t argue the advances made in speech analytics. Siri is probably the most often cited example, and I think that she is fantastic and can be a lot of fun to play with. But as someone who has spent a lot of money trying to go the infinitely scalable “machine only” route through various voice transcription technologies, only to find that the end result was not acceptable to our clients, I would suggest that speech analytics is not quite ready for prime time just yet. I use Google Voice transcription, as well as the AT&T version, and they both confirm my premise daily. The weakness of machine transcription is particularly evident in two-way conversations (remember, Siri and all of the voice recognition prompts are one-way). When you are validating your lead generation efforts, or even charging the end client by the call, accuracy is paramount, and the machines just can’t provide it yet.

    The subjectivity of our human reviewers that you point out is actually a strength of the product. Case in point: even if voice transcription were perfect (it isn’t), you still have to mine that transcription for “valuable” keywords or phrases. Unfortunately, the statement “schedule an appointment” doesn’t occur in a large percentage of quality calls. Our humans can make a subjective determination of the caller’s intent when they say, “I’ll swing by after work,” “You bet,” or “How about we make it two?” A trained English-speaking human can quickly determine the value of that call and tag it accordingly. When you are looking to separate the signal from the noise, this is vital.
