Adam Parks (00:00)
Hello everybody, Adam Parks here with another receivables webinar. Very excited here today to have some great guests to talk with us. I've got Prince from TEC and Mike coming to us from Enova. Now these guys are in the thick of building data waterfalls, unlocking value from the data sets that are available. So gentlemen, thank you so much for coming on today and sharing your insights. I really do appreciate you participating today.
Prince Matharu (00:29)
Thank you for having us.
Mike Walter (00:30)
Thank you, no problem.
Adam Parks (00:31)
Absolutely. For anyone who has not been as lucky as me, Prince, maybe starting with you: could you tell everyone a little about yourself and how you got to the seat that you're in today?
Prince Matharu (00:40)
Alright, well, I'll answer the second question first. How I arrived in collections was: I'm a travel addict, so I was backpacking in Australia, taking a little bit of space between finishing my master's degree, and while I was there a friend of mine had an interview lined up with GE Money and she couldn't make it, so I accidentally filled in. I was traveling, so I thought, okay, this would be a little bit of extra cash on the side, and it was a job on the collections floor doing five-to-nine shifts and then a Saturday shift. So that was my entry point. I had no idea about collections, had never done that before, but somehow I found it exciting. So I did that job for about eight months and then transferred my credits, finishing my MBA in Australia. Simultaneously, I was lucky to score a leadership and mentorship opportunity within GE Money. So I was put on an accelerated path, and within 18 months I was senior manager for all the debt portfolios that GE Money was either collecting on in-house or selling at the time, and then launched into my career, ended up consulting for the state government in Australia before moving to the States. And then for the past 10 years I've been with TEC and had the opportunity to consult for most of the major credit bureaus and work with excellent clients like Mike and Enova. So yeah, it's been a fun ride.
Adam Parks (02:09)
It sure sounds like a fun ride. I mean, I feel like we're going to have to talk about that some more, Prince, because there were so many interesting things said in just a few minutes. Mike, how about you? How did you get to the seat that you're in today?
Mike Walter (02:21)
I'm not sure my answer is going to be as sexy as Prince's about being international. But how I got into collections: I was in college in the Midwest, a junior getting a degree in finance, and I thought, I better get a job and get some work experience before I go out into the world. So a neighbor of mine got me an interview at Discover Financial. Now, this was so long ago that she worked in the voice authorization department. That's when people would still answer the phones from merchants to give approval instead of it being electronic. When I went in, the interview went fine, but back in the day they'd give you a typing test. I did okay on the typing test, but I was not speedy. And the jobs that were open were a data entry person in human resources and a collector.
Because of my score on the typing test, they said, we think you'd be perfect for collections. And so next thing you know, I'm a 30-day collector on the phones at Discover Card. Now, I could have been an HR executive, but instead I'm a collection executive. Fast forward 35 years, and I've worked for three lenders and for collection agencies, and like you said, I'm currently with Enova as the head of their consumer collections. I'm actually building a collection agency and a debt buyer. I've been there about two years, and we've now been in production for about six months.
Adam Parks (03:41)
Wow. So you guys both have a lot of experience coming from the banking side. Clearly you've had a lot of access to data and information, both working for your own organizations on the creditor side, doing consulting, and even working on the agency side of the world as well, which is why I think you guys are two perfect guests for today's conversation. As we were preparing for today's webinar, one of the things we talked about was that over a two-year period, 60% of data decays. And the rate of data decay is something we don't talk about enough in the debt collection industry. So maybe you guys could talk a little about how you view data decay and why you've prioritized building data waterfalls for your collections operations.
Mike Walter (04:31)
I'll go first, Prince, if you don't mind.
Prince Matharu (04:32)
Yeah, go ahead.
Mike Walter (04:35)
While it seems easy, a single vendor can't source all of your data or give you a profile of your customers. You really need vendor diversity, because vendor diversity is going to equal data diversity and coverage. If you only use one, two or three vendors, depending on the asset class you're in, whether it's prime, near prime or subprime, you're just not going to get the information. I heard a statistic recently that said relying on a single source of information can be unreliable, because up to 33% of the skip trace data may not match the right person you're looking for. In my last organization, which I was at for 10 years, a contingency collection agency that worked in government, healthcare and utilities, and now that I'm in the near and subprime space, I've always considered it a cost of doing business to have a very robust waterfall with five to six data vendors, so I can get coverage depending on the asset class I'm working in.
Adam Parks (05:29)
Sounds rather complex.
Prince Matharu (05:29)
Yeah, I would second that, Mike. I believe it was TransUnion who published a study showing that 70% of the client data on any system becomes obsolete within 18 months. So we've got multi-threaded sources now confirming, or challenging the notion, that if a company decides to just work their client data, the native data, with maybe a single source of append, it's a huge risk they're undertaking. The data is not only becoming obsolete, it directly ties to their potential revenue generation. It ties to how well they can build processes around the data, all the way down to resource management. For example, we've all come across the collection floor: if you're feeding even a good collector bad data, their results are being impacted, meaning they are not as successful as they can be. So the retention of good, talented collectors, all the way down to that point, is being impacted by having obsolete or outdated data. It's something that's very critical, and it hits all components of a business.
Adam Parks (06:47)
Even beyond the collectors, I would think it would have a direct impact on, let's say, AI opportunities and operations, any kind of digital communications, email addresses. Because if the phone number is bad, the email is probably bad. And we know that consumers may have one mobile phone or one mobile phone number, but they have many email addresses. I think the last I heard, it was an average of four.
Prince Matharu (07:11)
Yeah, between three and four. And you're absolutely right, Adam. I think as an industry or vertical, we're at a place where every buyer, agency, or operator within the third party vertical is now investing heavily into omni-channel solutions, digital platforms, digital communication strategies, and everybody's building some sort of AI capacity within their organization. Which is fantastic, because as a vertical we sometimes tend to lag behind and be wary of adapting to technological advancements. But at that juncture it becomes even more critical, because your tech stack or your digital communication processes are only as good as the data you're feeding them. If you're still not feeding the most optimized and efficient data strategies upfront, then your tech stack's not going to perform. And we've seen that. As a company, we've been in the data strategy and optimized waterfall sector now for over 10 years, and I've got a plethora of case studies that can demonstrate how data touches every aspect of your business. And Mike, I'm sure you've got personal experiences and have seen that yourself.
Mike Walter (08:24)
Absolutely. At my old agency, where I had 150 different clients, the data the clients gave us varied greatly. In the healthcare space, the data was pretty good because they wanted the patient information to be updated. On the government side, in that vertical, the data was not so good. So you sometimes would have return mail of a single percent on the healthcare side, but on the government side it could have been double digits, and that was state taxes, which are very lucrative. So you really needed to make sure you invested in data to bring it up. Now, in my current role, we're captive to our parent organization. We work near and subprime debt, but since we're primarily an online lender, while our emails are fantastic and our cell phones are fantastic, I would say our physical addresses aren't that great. But I rely on the physical addresses because I can only collect in the states that I'm licensed in, and I need to understand where consumers physically reside to make sure I'm doing okay. So I need to worry about emails and phones, but also the physical addresses, because of course you need to get the model validation notice out to all your folks, give them their rights, and make sure you can collect the debt. So you should spend time looking at your contingency agencies, looking at the client's data, or if you're a debt buyer, looking at the different debt sale files, but it's all very different. I'm sure Prince has people coming to him saying, hey, which vendor do you want me to use? And it really depends on what space you're in, what asset class, whether you're near prime or subprime, because they all specialize in something different. So you really need to dig into the details.
Prince Matharu (10:01)
100%, Mike. One of the biggest myths, and a key takeaway for today if I could pass one on, is that no single vendor has complete coverage of your file. Again, from doing this for close to 10 years and personally consulting for major bureaus, we've seen this time and time again. You can have one or two dominant sources that are yielding the highest on your portfolio for a number of reasons: it could be the demographic, your balance size, the vertical, your asset class, et cetera. But if today you operate within the third party world and you are single threaded with one data vendor, or you're not doing any append at all, that should be the number one priority that needs addressing today, because it has such a ripple effect on all the processes, resources, your tech stack, and all the other current and future investments you're making. That cannot be overemphasized, in my opinion.
Adam Parks (11:09)
Now, when organizations need multiple data sources, we're all trying to get the right vendor with the right data at the right time, or the right point in the account life cycle. But how does that play out in reality for organizations? I'm sure there's a spectrum here between organizations that are only using one or two data providers and those that are using multiple, but talk to me about how that plays out in practice.
Mike Walter (11:36)
I've got some good info here. So interestingly enough...
Prince Matharu (11:38)
Go ahead, Mike.
Mike Walter (11:39)
I worked with Prince at my previous agency, where we were strictly third party collections, and we set up our waterfall with six different vendors, I believe. At my new organization, I'm setting up five different vendors. But now that I've come on board, our first party collections area is very interested in doing a skip trace waterfall, but they want to do it at the 30 and 60 day delinquency buckets, not after charge-off and not 180 days later. And so the considerations they need to take into account when they implement are a little bit different than mine: what they're going to see versus what I see on my side. They really want to compare with what I've done on my side, but at the end of the day they needed to look at it a little bit separately, because in a lot of cases the accounts could be much more aged, or the information they have could be much better. So even for a use case within your own organization where the age is different, you need to take that into consideration.
Prince Matharu (12:31)
Yeah, 100%. To Mike's example, within the organization you have different criteria that dictate which vendors you use. It could be the age of the account, the overall account balance, or how much margin you have so you can carve out some space for your data spend. But even from a macro standpoint and a process standpoint, there are a couple of critical things. First, if you need a competitive advantage, if you're competing on a scorecard, et cetera, then again we need to put the data strategies under the microscope and ask where the data is coming from. Because again, there's no single vendor today that you know, as a company, will dictate high yield for you. So as an organization, I recommend quickly getting to a point where you can say, we have these top two or top three sources that deliver results. Now you're empowered to say, okay, I'm going to build data strategies with high yield vendors, but backing that you need additional sources, because you need alternative data sources as well. Today, file completion or coverage doesn't depend on just your credit report or credit data. You need alternative sources, especially if you're playing in the near prime or subprime vertical, where that becomes even more critical. And then you have other sources. Keep in mind that the data vendors are not sitting still either. They're very aware, and thankfully so, of AI augmentation and of the hygiene that is critical around the data, and of how that feeds their historic data models, their algorithms, their match logic, et cetera. So we now have known providers that are tying their historic credit-fed algorithms to information from retailers and telco providers on the back end, validating that, okay, this is Adam's phone number, and Adam used Uber Eats yesterday and it was delivered to X address.
And so that is now being fed, from a data augmentation standpoint, into the algorithm, strengthening the validity of a particular output for a consumer.
So everybody's now carrying this digital ID, which now needs to be sourced. Again, we need to, collectively as an industry, be aware of these things and find out which vendor is doing that. And it's a dynamic puzzle, right? If vendor A is doing that well today, that doesn't guarantee that 12 months from now they're going to be the only vendor doing it, or doing it well. So from a macro standpoint, on top of what Mike mentioned, these are the things we now need to start asking about and be aware are available too: how do I add validation and additional verification to the data that I'm purchasing?
Mike Walter (15:35)
And I'd also say.
Adam Parks (15:35)
It's very interesting. Good.
Mike Walter (15:39)
I'd also say that everybody should be careful, because having success with one vendor or a group of vendors at one organization with one asset class doesn't mean you'll be successful with those same vendors on a new asset class. While it's great to go ask Prince directionally, tell me who's good in this space, you can start there, but you really need to test and continue to go forward. In the near and subprime space, for example, organizations like Clarity, which is a subsidiary of Experian, or LexisNexis Risk Solutions are good organizations. I came from the prime space, so I'm learning things in the near and subprime space that I've never seen before. Not all vendors are created equal, and they don't all perform the same in different asset classes. So you really need to look at that, and the only way to make sure you're picking the right one for your organization is to test it and have the actual data to compare.
Prince Matharu (16:29)
Right.
Adam Parks (16:30)
So at a conference earlier this year, I was talking with an unnamed creditor. One of the things they mentioned to me was: the better I get at my job as a creditor, the more difficult your job becomes as a third party collector. I can see the truth in that, because as creditors deploy artificial intelligence and data sets and tools, they're going to continue to get better, and less is going to fall to charge off, which I think is the goal of the creditor.
Having worked on both sides of that equation, or currently working on both sides of it, talk to me a little about that statement and what it means to you. Do you think that's a true statement? And how does third party collections prepare itself to get that much better, to be able to continue to perform at the levels it expects?
Prince Matharu (17:16)
Go ahead, Mike, go ahead.
Mike Walter (17:19)
I have two different answers. If you're working for a contingency agency with multiple clients, I think, if the agency is being truthful, they would really like the original creditor to provide better data to make their lives easier from a servicing perspective, at least initially. And then, if you need to skip trace from a maintenance and a triage perspective, that would be an ideal world; at least you can get a baseline and go forward. In my new world, since I'm captive to our parent organization, I'm in the best of both worlds. I'm spending a lot of time focusing on the data that the original creditor, my parent company, has provided. So I have a vested interest in the organization doing better, in making recommendations to say: you should be doing bankruptcy scrubs, you should be doing deceased scrubs, you should make sure that at any point a customer can update their information so it's the best information, and it's scrubbed and it's structured. That way, when the account charges off and I'm going to service it, the lift on my side isn't nearly as difficult. So I'm in a great space right now, but depending on where you are, you should be asking those questions.
Prince Matharu (18:25)
Yeah, 100%. Where my head goes, if I understand the statement correctly, is that if a creditor, or rather a data vendor, is doing their job well, the job becomes harder for the agency or the debt buyer in understanding all the strengths a particular data vendor, or a multitude of data vendors, holds. Things like: how can I leverage all the capacity from my data vendors and data partners? That is, we can have a portfolio that's performing optimally, but then it's on us to inquire further and ask what other capacity can be deployed. Vendors now have very strong data augmentation methods for improving the overall quality of a portfolio that might be underperforming or suboptimal: things like SSN appends and missing date of birth augmentation, things that are going to lift the overall quality, which is then going to result in better matching and better data output. So that's one comment I'll make that we need to be aware of. The second thing would be: in a world where the data is now optimized from a quality standpoint, the emphasis shifts to building insights in-house. Having optimal reporting to say, we deployed X from a data and data augmentation standpoint; what was the subsequent result, and where was it felt? Because a lot of the time when I'm interacting with prospects or clients, one of the big obstacles is that we still don't know what the one-two punch is when it comes to data deployment and its effect, and where that effect is being felt. Most of the clients we interact with initially still cannot tell us what their overall baseline right party contact rate is, which you would consider to be a fundamental insight or fundamental measure. So the onus becomes heavier on building insight and then reverse engineering the data augmentation.
So it becomes again that one-two punch of having a multi-threaded data strategy and then building insight to say, okay, vendor X performed consistently well on the last six batches. Why is that? Are we doing something unique with that source, or do they have some unique information that is leading us to perform like that? And then going to other sources to ask, do you have this, or are you lacking this, right-sizing our business. Because again, from a commercial standpoint, if we have data cost, we now need to start questioning the rate of return associated with that data cost, and not just treat it as a tick box: yeah, we're buying data from one single vendor and that's good enough. I think the onus becomes even heavier, to your point, Adam, if all of a sudden the data quality is that much better.
Adam Parks (21:40)
So as we think about all of these different data sources and how these waterfalls come together, if the creditor's buying data and we're buying data from multiple sources, how are we managing that overlap and avoiding duplication to ensure we're optimizing our data spend?
Mike Walter (21:56)
I'll say from a client perspective, it's nice working with TEC, both at my previous organization and here. When you set up phone appends and address appends, they can dedupe everything for you, because they're getting all the files back from my five vendors. So I'm not going to pay for the same phone number from a different vendor. It's really the easy button, if you remember that commercial. I've tried to do a waterfall without a company like TEC before, and it's extremely difficult, because then you have to set up the infrastructure, the deduping and things of that nature. So it's really nice to use a service provider for that so you don't run into those headaches.
Prince Matharu (22:36)
Yeah, thanks, Mike. I think that's critical. Over the past 10 years, we've tweaked and built a tech stack that can be easily deployed. It's vendor agnostic, it's software agnostic, and critically, we control the config. So we make two requests. First, we request that the participating data vendors only return unique, appropriate hits with some recency bias, so that there are first-in, last-in components. Requesting that configuration, knowing these are variables that can be controlled, is critical. And then secondly, as a catch-all, enforcing strict deduping. It's relatively easy once you have the right technology, or a partner like TEC: we can dedupe up to 25 numbers, so we're only getting a unique data set that can be operationalized, and we can see the results. We take the same mentality to other data append elements as well, such as addresses. Those are slightly more complicated, because you can have slight variation even after NCOA standardization: apartment not being spelled out, instead it's APT or has a hash, et cetera. But still we can get to a point where, north of 80% of the time, we can dedupe successfully. And the cost of not having unique data can quickly add up, especially for things like addresses where you're now mailing a letter and incurring that cost. So that becomes critical. Thankfully, data vendors are aware of this as well, so they're standardizing on their side and they work with you. It just comes down to technological functionality and having either TEC or a partner like TEC that can dedupe very strictly.
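The deduping Prince describes, keeping only unique hits across vendors with the first (highest-priority) source winning, can be sketched roughly as follows. This is a minimal illustration under stated assumptions, not TEC's actual logic; the vendor names and record shapes are invented.

```python
def normalize_phone(raw):
    """Strip formatting so '312-555-0100' and '(312) 555-0100' dedupe."""
    return "".join(ch for ch in raw if ch.isdigit())

def dedupe_appends(appends):
    """appends: list of (vendor, phone) in waterfall priority order.
    Returns unique numbers; the first vendor to supply a number wins,
    so you never pay twice for the same number from a lower-priority
    vendor."""
    seen = set()
    unique = []
    for vendor, phone in appends:
        key = normalize_phone(phone)
        if key and key not in seen:
            seen.add(key)
            unique.append((vendor, key))
    return unique

appends = [
    ("vendor_a", "(312) 555-0100"),
    ("vendor_b", "312-555-0100"),   # duplicate of vendor_a's hit
    ("vendor_b", "773.555.0199"),
]
print(dedupe_appends(appends))
# [('vendor_a', '3125550100'), ('vendor_b', '7735550199')]
```

A real implementation would also normalize addresses (e.g. mapping "Apartment" to the standard "APT" designator) before comparing, which is why address dedup rates land below phone dedup rates.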
Adam Parks (24:23)
Well, either you have your own data scientists or you need to bring some data scientists in to make that function, because taking raw data and turning it into actionable intelligence is never easy. And when you consider all of the layers of duplication that are possible, I think it gets even more complex. So walk me through how you view the problem of, let's say, digital communications. An organization is trying to improve their digital strategy. It's time for them to start fueling it, because the car doesn't go very far without some gasoline in it, the gasoline being the data. What does step one look like? Where do you even start this process?
Prince Matharu (25:02)
I think for me, and I'm not a lawyer, I don't pretend to be one, but from my understanding of the compliance guardrails, step one is putting all of your communication strategy under the microscope and seeing where we can inject digital strategies. Because as an industry, we're quite often very hesitant in approaching digital strategies. We still think that using email, for example, instead of traditional mail is going to land us in hot water, whereas in my discussions with lawyers I have been advised to the contrary: there is no evidence from their experience suggesting it is a heavy compliance risk. Again, I'm not a lawyer, but that's the advice and guidance that has been shared with me. So that would be step one. It's going to impact your overall timeline: instead of sending traditional mail and waiting X number of days, you can accomplish that communication with an email strategy quite quickly, so your overall liquidation cycle can be accelerated. Then there is the overall cost element: running with that example, traditional mail is going to cost a lot more than email. It doesn't mean we just stop doing traditional mail, but the fun is in understanding how we dissect the portfolio so we can slowly start testing these digital strategies and figure out what the successful recipe is for us, our individual business, our individual need, and our individual customer lifecycle, and starting slow.
There are a lot of great options today in terms of highly credible, validated data, so that with email, your delivery rates and your open rates are very, very successful, meaning how many of the people you're emailing are being successfully delivered to and opening. So as a business, step one for me would be scrutinizing the existing consumer lifecycle, overlaying that with your existing communication strategies, and asking, okay, where can I inject the digital strategy? Once you're clear on that, the next step becomes: okay, I need email, so who has the best emails and why? Companies like TEC, or internal expertise if you have it, can be very helpful at that point to understand who the options are, which data vendors, what their individual product strengths are, and how to deploy them. But that would be step two.
Adam Parks (27:49)
So without sharing any secret sauce here, Mike, I did have a question come through that I wanted to float out there: do you know how much it costs you to get an RPC after you run it through five vendors? I think I see where this question is going, which is, aren't you spending too much money to get a right party contact? But I feel like there's a simpler way to look at it. Can you speak to that?
Mike Walter (27:50)
Sure, I won't give a specific number because we're still testing things out. I've only been in production six months. But I would say, if the scenario was I could only use one or two vendors, and I get no hit from the first vendor and I don't have a second vendor to go to, then I have no opportunity to get a right party contact there. So at the end of the day, you're only using a second, third, fourth, fifth vendor for the unsuccessful hits from the preceding
vendors in the waterfall. So when you look at it at the account level, the cost will not be that much more. It will be about the same, because at the end of the day, you're looking for incremental phone numbers, incremental right party contacts, and therefore incremental dollars collected. So I would say: the more vendors that can perform, that's what you want to hire. If you have five that do well, then keep five. If you only have four, then maybe you want to cut the fifth off, but then you need to dig deep into what they're returning. And back to a previous point I wanted to make, Adam: the first step I would take is partnering with your internal analytics team, if you're lucky enough to have one.
One of the first things I did when I got here was take a look at the quality of the phone numbers we're getting, looking at things like which phone numbers had a right party contact and which ones had a proxy for a right party contact, say, talking to a third party. Also looking at your telecom data to say, hey, were the result codes you got from the telecom data valid or not valid? And so at the end of the day, when I buy a new file, at a minimum I'm skip tracing at least 6 or 7% of the portfolio, even if I have a phone number.
Because my analytics team created a scoring model for me that says how good a phone number is, based on right party contacts and other factors, when you're skip tracing. So the days are gone where you just say, I'm only going to skip trace accounts where I don't have a phone number, or I'm only going to skip trace phone numbers where maybe I haven't had a right party contact in X number of days. You should really evaluate all your phone numbers, and do it over time.
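The waterfall economics Mike describes, where only accounts with no hit from one vendor flow to the next, can be sketched like this. It is a hedged illustration with invented vendor hit tables, not Enova's actual process; per-account cost stays bounded because later vendors only see earlier misses.

```python
def run_waterfall(accounts, vendors):
    """vendors: ordered list of (name, hits), where hits maps
    account_id -> phone. Returns appended numbers plus per-vendor
    query volume; only no-hit accounts flow to the next vendor."""
    results = {}
    volume = {name: 0 for name, _ in vendors}
    remaining = list(accounts)
    for name, hits in vendors:
        misses = []
        for acct in remaining:
            volume[name] += 1          # each query to this vendor costs money
            if acct in hits:
                results[acct] = (name, hits[acct])
            else:
                misses.append(acct)
        remaining = misses             # only unsuccessful hits pass downstream
    return results, volume

vendors = [
    ("vendor_a", {"acct1": "3125550100"}),
    # vendor_b is never queried for acct1, so its entry there is never paid for
    ("vendor_b", {"acct2": "7735550199", "acct1": "never-used"}),
]
results, volume = run_waterfall(["acct1", "acct2", "acct3"], vendors)
```

Here vendor_a is queried three times and vendor_b only twice, which is the point: adding a fifth or sixth vendor only adds cost for the residual accounts that everything upstream missed.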
Adam Parks (30:10)
Very interesting approach. As we think about these data waterfalls and this entire process, do you feel like the use and management of data has become a differentiator between collection agencies, or between creditors? Do you see that as a larger differentiator these days, especially given the increase in, let's call it, the velocity of data decay?
Prince Matharu (30:37)
I do, 100%. It's going to sound like a rather blanket statement, but most of the successful companies I interact with are heavily invested in understanding where good data resides, with which providers, and how to deploy it successfully. In my experience it's 100% a competitive advantage that's giving them the added edge. Keep in mind that today, for agencies and debt buyers, it's very, very competitive: the margins are not perhaps what they used to be, and the inventory available has been impacted. So you're competing on very thin margins, and data can absolutely be that competitive advantage, not only in the overall success but in the speed at which you're able to liquidate, which is again a huge competitive advantage. So in my experience, 100%, it's becoming even more critical as we deploy digital strategies. To my earlier point, your tech stack's only as good as the data you're feeding it. So if you are today investing in, or even entertaining, switching CRMs or working with a different telephony system, at any juncture where you're considering investing in technology, please consider what data will be fed to that tech and the subsequent results, because it goes hand in hand.
Mike Walter (32:07)
I would say that, especially in the contingency space, most clients that are true partners know how good their data is, and how good it isn't, and they'll let you know. They've come to you as a third party collection agency because they expect your expertise and that you have a robust waterfall that uses multiple data sources. So a lot of times they can pinpoint the weaknesses in their data and they want you to come up with a solution to fix that, because they don't have those capabilities internally, for whatever the reason is. Maybe they've been around for 40 or 50 years and they just can't get the project prioritized to make their data a bit better, so they rely on a collection agency to do that for them. And so it's very incumbent upon us to make that a selling
point. And I've seen that time and time again in my previous life at a contingency agency, where this was a competitive advantage: you had to have a very robust waterfall to provide the data. That was almost as important as the overall recovery rate that you were trying to sell to the customer.
Adam Parks (33:04)
Interesting. And now that we think about the types of data and we've talked about kind of the right data for the right product sets, and I'm sure also the right point in the life cycle, but do you see big differences between buying verified versus unverified data and the ROI that you would expect from those data purchases?
Prince Matharu (33:26)
This is a subject close to my heart, because I'm a bit of a nerd at heart and I like to understand it, because verified products historically, and even now, are much higher cost. So my attention has been on a world where the capacity for providers to verify information has increased manyfold, and AI may play a role in that in the coming years. What's the suggested ROI, and to our earlier audience question around cost per RPC, how does that play out?
And the short answer is, today we have providers that can even supply you a verified phone number. Traditionally, verified products have been in their own arena: place of employment, et cetera. But now we have providers who are successfully providing verified phone numbers and addresses, so you can really invest, or increase your data cost even proportionately, to procure some of the verified data. And this might not be 100% of your strategy. It can also be deployed at the later stages of your strategy, or on high-risk or high-margin accounts. But I would say the right combination is: run traditional strategies in a waterfall method containing multiple vendors for 60 to 80% of your portfolio. Then carve out the 20% where you can deploy verified data. It would depend on the company and what they're working with, but something that's high risk, or where speed matters, where you're competing on scorecards, so you only get 30 days to collect or maximize your liquidation. I think that is where the verified piece really makes sense, because you can invest heavily, but your return is going to be multiplied as well. So that would be my notion between verified and non-verified.
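That 80/20 carve-out can be sketched as a simple routing rule. This is a hypothetical illustration: the priority score, the share, and the field names are all assumptions, and in practice the score would come from a risk or liquidation model.

```python
def route_for_verified(accounts, verified_share=0.2):
    """Rank accounts by a priority score, then carve out the top slice for
    higher-cost verified data; the rest stays in the standard multi-vendor
    waterfall. `priority` stands in for risk, balance, or scorecard urgency."""
    ranked = sorted(accounts, key=lambda a: a["priority"], reverse=True)
    cut = int(len(ranked) * verified_share)
    return {"verified": ranked[:cut], "waterfall": ranked[cut:]}
```

The same function works with any share, so the split itself can be one of the variables you test.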
Mike Walter (35:21)
Typically in the first party space, I use verified data prior to charge-off, only from the standpoint that you're gonna write the entire balance off your balance sheet, and it's worth it if you pick the right accounts to get the verified data on. And then when you move over to the third party space, I predominantly use verified data on accounts when you need to go down the legal channel. So I tend to agree with Prince: 80%, I would say, on the non-verified data, 20% for the verified data, depending on the use case.
Adam Parks (35:51)
Yeah, I'm seeing another question here from the same person really trying to dig into a specific cost, which I don't think is something that we can really do here because the right party contact, which vendors you're choosing and how that flows is going to really depend on just too many different variables for you guys to be throwing out numbers. At least that's my opinion.
Prince Matharu (36:11)
What I would say, which will maybe be helpful: in every case study that we have run, the prospect or the client came to us using either a single vendor or maybe two vendors, and we deployed a systematized multi-vendor waterfall. Please keep in mind that the name of the game isn't just adding multiple vendors; that's only one piece. The second piece, and the analogy I use is recipe and ingredients: you can have the same ingredients, but apply the wrong recipe and you end up with a different dish. So it's equally important to know where those five vendors enter and how to stack them, i.e. who's gonna get first attempt, who's gonna be second, third, fourth, and fifth. The way we tackle that is we initiate with a champion-challenger, where we give each one of the participating vendors equal opportunity when it comes to inventory in first position, and in subsequent positions thereafter, because again, we have the technology to orchestrate these things. Once you do that, that's where you really arrive at the cost per RPC. You can monitor the incremental lift after deduping and how much your data cost is truly going up, because again, what we're not doing is saying we have a hundred accounts and we're sending all hundred accounts to all five vendors. You can do that, but that's going to really blow out your data spend, and your cost per RPC is going to be incredibly high. What we're recommending is a systematized waterfall where you're only going for unique hits, and then dialing.
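The unique-hits mechanic Prince describes can be sketched in a few lines. This is a toy illustration under stated assumptions: the vendor interface, prices, and coverage are all invented for the example.

```python
def run_waterfall(account_ids, vendors):
    """Stacked waterfall: each vendor only sees accounts that still lack a hit,
    so you pay per unique gap instead of sending the whole file to everyone."""
    hits, cost = {}, 0.0
    remaining = list(account_ids)
    for vendor in vendors:
        if not remaining:
            break
        found = vendor["lookup"](remaining)           # {account_id: phone}
        cost += len(remaining) * vendor["price_per_record"]
        hits.update(found)
        remaining = [a for a in remaining if a not in found]
    return hits, cost, remaining                      # remaining = no-hit accounts

def make_vendor(db, price):
    """Stand-in vendor: a coverage table and a per-record price."""
    return {"lookup": lambda accts: {a: db[a] for a in accts if a in db},
            "price_per_record": price}
```

With two stand-in vendors covering accounts {1, 2} at $0.10 and {2, 3} at $0.15, a four-account file costs $0.70 through the waterfall (4 + 2 lookups) versus $1.00 if every account went to both vendors, while returning the same three unique phones.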
Adam Parks (37:44)
Sounds very expensive.
Prince Matharu (37:59)
Once we deploy that, what we've seen time and time again is anywhere from a 20 to 25% lift in the overall contact rate. So your right party contact rate increases once that model is deployed, on average, from our clients' experience, 20 to 25%. This kind of gives you the delta between your current cost and the incremental cost, and obviously it's a proportionate lift. At the end of it, what we see is a 20 to 25% lift in contact. Now you can run a commercial model around: okay, what does that lift mean to me? If you're a first party, then you're curing the entire balance. But if you're a third party, then you can factor in your margins, and overall gross dollars versus fee dollars, and then quickly get to the overall lift.
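As a back-of-the-envelope version of the commercial model Prince mentions, the lift can be converted to dollars like this. All inputs in the example (base contacts, dollars per RPC, fee rate, data cost) are made-up figures, not numbers from the webinar.

```python
def lift_model(base_rpcs, lift, dollars_per_rpc, fee_rate, added_data_cost):
    """Translate a contact-rate lift into incremental dollars and net it
    against the extra multi-vendor data spend. For third party, fee_rate is
    your contingency fee; for first party, use 1.0, since the cure applies
    to the entire balance you own."""
    extra_rpcs = base_rpcs * lift
    gross = extra_rpcs * dollars_per_rpc           # incremental gross dollars
    fee_dollars = gross * fee_rate                 # what the servicer keeps
    return {"extra_rpcs": extra_rpcs,
            "gross_dollars": gross,
            "net_of_data_cost": fee_dollars - added_data_cost}
```

For example, 1,000 baseline RPCs with a 22% lift at $85 collected per RPC and a 25% fee, against $1,500 of added data cost, nets $3,175.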
Adam Parks (38:49)
Interesting approach. I like what you're saying here. Why do you think so many agencies are lacking real visibility into their vendor performance? Is it a technology limitation? I mean, most organizations don't have a dedicated data scientist inside their company. So what do you see as the challenge that the industry is facing?
Mike Walter (39:09)
I'll take that one first, Prince. I would say even at my last agency, and we were a medium-sized agency, we did have a dedicated data scientist, but we still found value in employing a company like TEC to do this work for you, so they could focus on other scoring initiatives, like a revenue attribution model, so you can measure your net and gross liquidations by channel, whether it's email, whether it's texting, whether it's direct-drop voicemail, something of that nature. So we found it very helpful to still
partner with your analytics team and then have them partner with a company like TEC so they can get you to the right answer.
Prince Matharu (39:52)
Yeah, thank you, Mike. I think my experience tells me it's our conditioning as an industry that happened over the years: data has not been a priority for most of us, and I think that's a fundamental difference in the approach or philosophy that we've adopted. Traditionally, data is almost considered a commodity, almost that checkbox: okay, just buy from one vendor, all data is shared, it's identical, yes, we do append, and that's that. Reporting on what comes out the other end has not been a priority. Whereas I think the shift is slowly occurring, where companies are becoming more aware of how data spills over into every single process of the business, all the way down to bottom-line revenue. Insights, and insights-driven decision making, are now becoming important, and it's evident from our case studies and our clients: whoever focuses on data wins. So why is it lacking? It's just a philosophical difference. Historically, nobody really focused on it, and we've just adopted a throw-things-at-the-wall-and-see-what-sticks approach. I think that's becoming hard to keep up with as time goes by.
Mike Walter (41:13)
I'll give you one more, maybe a practical answer, just from running multiple agencies. When I take a look at what I've done over the last two years: I had to buy a collection system and get it licensed. I had to get licensed in every state that required it, whether as a lender, because I'm a debt buyer, or as a collection agency. I had to build a scoring model with our analytics team. I needed to hire collectors. I had to integrate a dialer. The list goes on. There were 25 things I had to do before I even got to, oh, I need to hire some skip vendors to append my phone numbers and addresses and make sure the data is really good.
And so again, I was not looking for the easy button per se, but I was looking for a partner to help me do that item while I focused on the other things to get them done. So I think it is not always the number one priority when you compare it to all those other things I just mentioned, everything you need to do to set up an agency or a debt buyer.
Prince Matharu (42:05)
Right. Yeah.
Adam Parks (42:09)
That also makes a lot of sense. You know, when we think about testing and measuring, as we're working across all of these different data vendors, and even the order of operations, right, the most important thing in math is what order we do things in. One of the things that I've always struggled with is getting organizations to define success before we start testing a new piece of data. I always struggle with, you know, setting the target once we've already taken our shot. How do you
communicate to organizations the value of establishing the target, of defining and agreeing to what success looks like before the data test starts, so that you can solidify what the outcome really is, versus things changing course midstream?
Prince Matharu (42:57)
From my perspective, we have the luxury of being in the thick and thin of it. As a reference point, last year we processed over 50 million transactions across multiple clients and multiple data vendors. So both subjectively and objectively, we know which data vendors are providing what level of success in a particular asset class or vertical. So we can start with that and work with a client to say: if you're collecting on medical debt, after deploying a multi-vendor waterfall strategy and optimized workflows, you should be garnering X in terms of your overall collection rate, to create a baseline. I think that's a key advantage that we have. We can quickly detect if a client is underperforming, at baseline, or, in some cases, which I have not seen often, to be honest, overperforming the baseline. So I think that would be the starting point.
And then it again depends internally on what processes were deployed, what the data append process was like, and what the existing contact rate is. And by the way, contact is not the be-all and end-all, because there are other factors, right? There is your overall coverage of the file, and how hard you have to work, because again, in a multi-vendor strategy setup, you can work a particular vendor harder in terms of dials, which proportionally has a correlation to the contact rate. So there are other metrics of equal importance, which need to be scrutinized equally to get to the final measures you can hang your hat on, be it contact rate or return rate for mail, et cetera. But I think, from my perspective, we process so much data that we can generate subjective and objective baselines for almost any asset class that's out there.
Mike Walter (44:46)
I would say that, you know, TEC is fantastic to partner with, because they have an excellent dashboard that provides the metrics that Prince is talking about. If there's a metric you'd like to see that isn't there, I've asked for additional data, and as long as I provide that data in the skip trace activity file, so they can then provide the view that I'm looking for, they can do that. So I'd say along this path, since it's the second time I've used TEC, we've been pretty much on the same page as far as the success criteria of what we're looking for.
Adam Parks (45:17)
How do you avoid overloading the variables in any given test? Because we're always in such a rush to get things done that sometimes we're testing for maybe too many things, which can muddy the waters in our results. Any advice on avoiding that challenge?
Prince Matharu (45:18)
For me, it would be: control the variables that you can control, and control them well. Meaning, let's say we start with four different vendors. Start with controlling the config so that it matches across all vendors. Your request for each vendor is going to be: give me the two best phones, or the single best phone after deduping. Make that universal; that's your universal configuration for all participating vendors. Let's start from there. Then, once the data is deployed and you're receiving hits back, have a set standard, or again, control the variables that you can control. If you're gonna run a campaign with a 10-second or 15-second preview, make that universal across all data that's coming back, and decide on how many dials you're gonna do and make sure that's the number that's hit across the board, right? So you don't end up with one vendor getting 30 dials per phone while the other vendor is only getting five, because again, there is a direct correlation. So the advice I'll give: write down these two or three big variables, control them and control them well, and then focus on one or two key metrics. In a phone scenario, that could be your overall contact rate and your overall liquidation. Because I've seen cases where you have vendors with the same contact rate, but there is a big disparity in liquidation. What that is telling you is that one vendor is specializing in finding accounts or people that you've generally struggled with, or they have some selective source that is generating high liquidation. Even then it's not always conclusive, because you don't fully control liquidation. But those would be the things that I would focus on: name or list the variables that you can control, and control them well.
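Prince's two controls, one universal config plus an equalized vendor rotation, can be sketched like this. The config values and vendor names are hypothetical examples, not TEC's actual settings.

```python
import random
from collections import Counter

# One universal config so every vendor is measured under identical treatment.
UNIVERSAL_CONFIG = {"phones_per_account": 1,    # single best phone after dedupe
                    "preview_seconds": 10,
                    "max_dials_per_phone": 12}

def assign_positions(account_ids, vendors, seed=7):
    """Champion-challenger: rotate the vendor stack per account so each vendor
    gets an equal share of inventory in first (and every other) position."""
    rng = random.Random(seed)
    shuffled = list(account_ids)
    rng.shuffle(shuffled)
    order = {}
    for i, acct in enumerate(shuffled):
        k = i % len(vendors)
        order[acct] = vendors[k:] + vendors[:k]   # rotated stack for this account
    return order
```

Because the stack rotates account by account, every vendor lands in first position for an equal share of the file, so contact-rate comparisons aren't confounded by waterfall position.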
Adam Parks (47:29)
I like that. Yeah, I have no follow up for that one, Prince. I appreciate that explanation. But going into our final minutes here, guys, I just had one more question for you about, you know, what's the number one myth that organizations or clients are bringing up to you about data that you would like to dispel?
Mike Walter (47:29)
All stated. Nothing to add.
Prince Matharu (47:48)
Go ahead, Mike, and then I'll share mine.
Mike Walter (47:51)
I would say, when you're working with a client, especially on a contingency basis, that sometimes there's an expectation that the results you get are not dependent on the data, or the quality of the data. They want a certain result. And so part of your job as a partner to your client is to educate them on what their data quality is, what you need to do to hygiene it, and how they can partner with you to make it better and more reliable, so we can get a better result for them. So instead of the client saying, this is your problem, it's really partnering with the client to make sure that they understand that the quality of their data is going to directly drive the final result we as a servicer can deliver, whether it's a contact rate, dollars collected per right party contact, or overall liquidation.
Prince Matharu (48:46)
Right, agreed. I think I've got two myths that I can debunk or provide commentary on, and these are all statistically backed for me from working with TEC in my role for the last 10 years. One is the notion that if you're isolating yourself to one single vendor, that single vendor is going to have complete coverage of your file. I've never seen that, and I don't expect to see it. It's never happened. So that would be myth number one. And I think the second myth would be that if you deploy multiple vendors, the only thing that's increasing is your data cost. That has also never been true in my experience. Yes, your data cost will incrementally go up, but I have seen proportionate results: whatever the defining metric is, be it liquidation or contact rates, I have seen proportionally higher yield and higher improvements in those metrics, and not just higher data cost. So the myth that multiple sources are somehow redundant, that they don't add any value to your success metrics and just inject data cost, has never been true in my experience either.
Mike Walter (49:58)
I'd also say that when you're working with your data vendors, and TEC will help you with that, at the end of the day you're going to do the negotiations, you're going to do the contract. Some of the sales and business development people do feel, right or wrong, that they can provide all the data you need from a single source. Now, sometimes the better ones will realize that they have a certain niche, a certain area where they can really excel. But I'm very honest with the data vendors. I let them know it's going to be a head-to-head, that we're going to split the file between them, and we're going to see who performs the best and who's going to get position one. And the vendors that really want to partner with you will try to understand where maybe their data is not as good, and they'll try to enrich the data. But I would say, when you engage the vendor, don't work with just the sales or business development person, some of them are really good, but try to get the sales engineer, try to get a data person, try to get the equivalent of a Prince in the background, so you can ask the right questions to make sure you know the data that you're getting and you understand what's going on.
Adam Parks (50:56)
Gentlemen, this has been a fantastic discussion today. I mean, I learned a lot through this discussion. So thank you so much for coming on, sharing your insights, and participating with me today. This has been fantastic.
Prince Matharu (51:10)
Thank you, thank you gents. Glad I got to do it with you. A lot of experience here, especially Mike. Thank you for sharing your perspective and Adam, thank you for facilitating. This was fun.
Adam Parks (51:19)
Of course. I look forward to our next event together. So if you guys could stick around after we end here for a minute, I want to make sure that we debrief. And for those of you that are watching, if you have additional questions for Prince or Mike, you can leave those in the comments here on LinkedIn. And we'll be posting the replay to YouTube so that you can share it with other people in your organization, or your friends and colleagues throughout the industry. But until next time, guys, thank you so much for
Adam Parks (51:41)
joining me today and sharing your insights, and thank you everybody for watching. We appreciate your time and attention. We'll see you all again soon. Bye everyone.
Prince Matharu (51:50)
Thank you all.