In this session, John Bedard of Bedard Law Group and Scott Hamilton of ARM Tech Advisors break down how AI is reshaping operations, from guardrails and risk management to real-world deployment strategies. 

Adam Parks (00:00)

Hello everybody, welcome to an episode of the AI Journey. Very excited today because even in our green room before we joined the conversation today here live, we've been going around and just having some really interesting discussions around artificial intelligence, its application to the debt collection industry, and what that all looks like. 

So very excited today to have John Bedard and to have Scott Hamilton here joining me, talking about going beyond just the demos related to artificial intelligence and trying to better understand that gap between the policies and procedures that we have written and the actual operations of our businesses. So to kick it off here today, starting with you, John, could you tell everyone a little about yourself and how you got to the seat that you're in today?

John Bedard (00:50)

Sure, sure. My name is John Bedard, and I'm an attorney here in Atlanta, Georgia. And our firm represents debt collectors, debt buyers, creditors, and lawyers. We help them stay in compliance with a myriad of laws that regulate their businesses around the country. And we also do their defense litigation when they get sued or investigated by the government or by consumers.

And Adam, I've known you for years, and I appreciate being invited here to chat with you and to chat with Scott today. So thank you.

Adam Parks (01:22)

Greatly appreciate your participation because I think that balance between the courtroom experience representing so many different types of organizations and then having built some compliance AI-related tools yourself really gives you an interesting perspective. 

Scott, how about you? Could you tell us a little bit about yourself?

Scott Hamilton (01:40)

Sure. To the green room comment you had a second ago: retired banker, 25 years or so with five of the top 10 banks, leading sort of transformation strategy type work. Now I've spun off from all that and have an advisory firm where we only work in the consumer collection space, predominantly gathering where people are, where they're going, and the best practices that they'd like to share with each other.

And we basically just gather that and reshare it in order to help the industry transform a little faster and a little safer. So there's been a lot of conversation on this exact topic within that community over the past year. Excited to be here.

Adam Parks (02:26)

So both of you have had the opportunity to work with a variety of different organizations, and from different perspectives. I thought we'd start by talking a little from, let's call it, the highest level about what we're actually seeing. So John, you spend a lot of time protecting the industry, even serving with the ACA on their kind of defense bar.

What do you think the threat level is if we go into litigation and our organization is not doing the things that we say we're doing in our policies and procedures?

John Bedard (03:01)

Well, you know, so there's a difference between the risk level and the threat level, right? And so when I hear you say threat level, I'm asking, okay, are the threats coming? You know, where are they and what do they look like? And quite frankly, we're not seeing a lot of threats related to AI right now. Consumers aren't, or at least we're not seeing it yet, it's not hitting our desk. Consumers aren't complaining about their interaction with AI. We're not seeing the government yet.

Adam Parks (03:05)

Fair.

John Bedard (03:26)

Complaining very much about AI and how the industry is using it. It's just not yet hit our desks. It doesn't mean it might not be happening; it just hasn't gotten to the level at which we now need to sort of start litigating it. Contrast that with the risk associated with the adoption of this kind of technology and how the industry is dealing with that.

We are seeing the adoption of AI in various forms in the industry and there are risks associated with that. Some are very good at managing those risks, others are not as good. And so I think the risk profile is probably a little bit higher right now than the threat profile. 

But I think only time is gonna tell what happens to that, as this just sort of permeates not just our industry but our entire culture, and changes the way we live and the way we do business.

Adam Parks (04:30)

Interesting, but the risk level, I don't think, is necessarily the artificial intelligence. Maybe the risk level exists in the way that we have been doing things. Live collectors on the phone are not necessarily gonna follow the same systematic rule sets that we might see through an artificial intelligence type technology deployment.

John Bedard (04:53)

Yeah, most definitely. There are ways, and I know we're gonna talk all about it, that we can really control this technology. We can set up guardrails, we can do that. The gap that I'm seeing now is that the programmers who create those things are disconnected from the operators who have an idea of what they want the goal to achieve, what they want the tool to achieve.

That's, again, disconnected in some circumstances from all the compliance people who are responsible for keeping the entire system inside the guardrails.

Adam Parks (05:30)

Scott, I know that you've got some thoughts on this as well as you've helped organizations kind of prepare and deploy this type of technology. What have you seen in terms of preparing those guardrails and trying to close that gap between what's written and what's done?

Scott Hamilton (05:46)

Yeah, a couple of thoughts. At the higher level, manual processes probably inherently bring on different risks, and oftentimes more frequent risks, but they're sort of known risks. And we have controls in place that are pretty common to mitigate most of that. A lot of it is hard to prevent, but easier to detect.

As this technology comes up, I think the initial concerns, for the most part, are: is it compliant? Will it hallucinate? Like, is the tech good enough to keep me out of trouble? I think the answer to that is pretty much yes, it is. But to John's point and your points, Adam, how it's actually designed, built, and guardrailed is where I see the risk. Most of these companies, as smart as they are, when a creditor or a buyer gives them the guardrails, sometimes don't understand the spirit behind the letter, and they develop it in a certain way.

So the new muscle that needs to be created, to audit and verify and provide rapid feedback loops, is one we're gonna have to practice in order to leverage the greatness this brings but mitigate the downside that can go unseen at large scale, unless you build a couple of new muscles.

Adam Parks (07:22)

Organizations are going to need to build AI trust within their organization as they deploy these things. And so as I've talked with organizations about, for example, AI voice bots, one of the things that we've talked about is starting with the incoming calls and basically starting with those IVR style conversations and letting the system learn before you start getting into outbound calling and let's call it higher risk levels. 

Have either of you seen organizations deploying the tech in that format in that way?

John Bedard (07:57)

Absolutely. Yeah. And in fact, that's actually how we advise the clients who come to us and say, hey, we'd like to start with this technology. We see benefit. How do you recommend we start? And you know, I always say, baby steps. Let's take baby steps and try it on the inbound side first, where you're leaving money on the table. Let's try it on the inbound and let's get good at it. We're going to learn so much about your organization.

You're going to learn so much about your technology when you begin to deploy it in a controlled way like that, that it'll make expanding the use of it throughout the organization that much easier once we start in a very small and controlled way.

Scott Hamilton (08:41)

Yeah, I've seen probably 15 production pilots, and Adam, to your point, most of them start in that sort of way: with inbound, or after hours, or, you know, bucket one or bucket ten, either the front end or the very bad, small-dollar stuff. They carve off a portion of the business, they do an RFP, talk to five companies, pick one.

IT integration, model tuning, they learn a ton by doing something super small. Now that's the great news, John, to your point: everybody learns what muscles they need to build and figures out what's easy and hard. Generally, it ends up being a little harder than people predicted, but there's your first learning. And they generally prove to be more efficient. There's your second learning.

The third learning, which we're still not through yet, is: are they more effective at scale over time? We haven't figured that out yet. But I think, to John's point, there's a lot of new ways you need to onboard these things: do InfoSec controls, do verification, guardrail definition. All that sort of stuff is greatness across the board, because you've got to figure that out no matter what.

Adam Parks (10:05)

It sounds like it's a rethinking of the policies and procedures related to the organization and what that starts to look like. And as we think about the use cases for artificial intelligence, and I've gone into them in depth in the TransUnion Report and other webinars and podcasts, so I won't go that far, but I'm curious to get your thoughts. 

I've always looked at six use cases for how a collection agency, debt buyer, creditor, or law firm can deploy the technology for their internal purposes. I've started to realize that there is maybe a seventh use case here, which is outsourcing to an AI-first organization, whether that's an agency or whatever the case may be. As we look at these organizations that are built from the ground up to deploy this AI technology into the debt collection space and accomplish the underlying goal, does that become the seventh use case?

And does that help keep us from straying from our policies and procedures, and maybe from having to make that same potential investment into the guardrails that are necessary to keep the technology on track?

Scott Hamilton (11:17)

I don't know if this is quite what you meant, but what I heard you frame out is a choice: do you plug in use case by use case into your existing tech stack, or do you take a different path of really leaning more on these firms to do more faster, albeit at a smaller scale?

Adam Parks (11:41)

Or is it even a champion-challenger situation, where there's a balance between those, and we're testing the same way that a creditor tests their internal collections against external capabilities?

Scott Hamilton (11:50)

Yeah. Yeah, I think that's where we'll land. You can't fully integrate or fully empower one of these companies to do everything. Like, that's just not smart. The winners aren't that clear, the pricing models are unknown. So you will, just like you have multiple agencies today.

My prediction is you'll end up with multiple AI companies, and you'll champion-challenger them. Whether you choose to integrate them into your tech stack or do more of a placement-file engagement, which is a lighter lift but less control, I think both are in play. Both have pluses and minuses, and both require new risk management muscles in order to protect yourself, per my last comment.

I tend to start with what makes it beautiful, which is big data: test and learn, repeat, get better, repeat, test and learn, big data. So to me, that means multi-channel, two-way models that can execute across multiple channels, so they can learn across multiple channels, so they get better, faster.

That is a little bit of a leap from doing an inbound after hours test. Like how do you jump from that to that? And I think therein lies a really big open strategic question for the industry.

Adam Parks (13:24)

I think John answered that one with baby steps, and I don't want to be all What About Bob about it, but it's baby steps to the elevator, then baby steps onto the bus, right? We have to take these small, incremental steps.

But John, from your perspective, I know one of the biggest challenges, and we talked about this on a recent webinar, is organizations that say "we don't do that" when that's actually exactly what they're doing. How does the deployment of this type of technology play into that scenario, and what do organizations need to be looking for to avoid falling into that "we don't do that" pitfall?

John Bedard (14:00)

I wish I had a nickel for every time a client said, oh, we don't do that, John, don't worry about it. So hold on a second now, this evidence suggests otherwise. Then we go, woof, all the evidence says this. Well, maybe we're doing something that we don't know about. And that, I think, is one of the real growing risks here, because what we have observed is organizations that either outright ban it or refuse to actually address it at a company-wide level.

They're the ones that are at the most risk of the technology being misused, because nobody's really paying attention to it, because everybody believes we're not doing that and we don't do it. That's what we're seeing. Contrast that with the companies that say, we're going to address this head on.

We're going to actually create a tool that we feel comfortable with or purchase a tool that we feel comfortable with, put the guardrails on it that we believe need to be there, and then let our people use judgment to use it in ways that can help them in their businesses, in their responsibilities. And so I like that approach better because it establishes a company-wide policy.

It establishes a company-wide procedure on how this technology is going to be implemented. It's transparent, because the company knows what's happening, and it doesn't incentivize everybody at the company to go find an alternative and use it unbeknownst to the people who really need to understand what's happening at the company.

And so I like that approach. I think it's a lot less risk than what I'll call the ostrich approach, which is: we're not doing it, we don't do it, you're not allowed to do it. Because this, if it's not already today, will very soon be ubiquitous, not just in our companies but in our everyday lives. We're not going to escape this or avoid it, and so the ostrich approach is not what we recommend that clients take.

Adam Parks (16:08)

So you bring up a really interesting point. It makes me think about the last couple of weeks, as we look at Anthropic and the evaluation of the new Mythos model that is currently being held back from release because of its capability to point out zero-day vulnerabilities across technology platforms, across all spaces. Not really a debt collection focused problem, but we've even had Jamie Dimon from Chase come out and start talking about the level of cyber attacks that we're expecting to see when that comes into play.

So knowing that we have these kinds of potential threats on the horizon, is there also yet an opportunity here for us to use Judge LLMs or other tool sets to actually evaluate the operations of our organizations in comparison to the policies and procedures that we have deployed across our companies?

I know it's a really general and broad question, and Mythos is such a new topic that it's...

John Bedard (17:04)

I think that's beyond many of the clients that we help today. We're not getting any questions about that. And I'm just not seeing that concern right now. Notwithstanding how much of a concern it should be or could be, it's just I've not seen it on the radar of our folks yet.

Adam Parks (17:29)

But do you think there's an opportunity here for us to use this same kind of technology to measure ourselves against our policies and procedures? We're now capable of processing so much information. The context window, for example, of even Anthropic's Sonnet is thousands and thousands of pages, right? Which is probably equal to or greater than the policy and procedure manuals we have in place today. Does it then open new opportunities for us to evaluate our organizations against those policy and procedure gaps, by looking at the call recording information and the breakdowns from the compliance-related tools, and try to get a better handle on that, to remove that "we don't do that" scenario and empower those executives?

Scott Hamilton (18:24)

100%, and maybe not even with the full power that some of the tools have today. The ability to aggregate transaction types and dispositions and frequency of engagement and outreach and transcripts and all that: a lot of firms are already doing that today. If you're not, it's not that tough.

You can ask it to go find compliance breakdowns. More interesting is when you can start to tease out customer experience extremes and complaints, and widen the aperture to include more negative experiences. If you're still doing manual call listening and checklists, you're toast, or you just have a big opportunity to save a bunch of money and go a lot broader and deeper with the same expense. So for sure, that is the case.

I don't know if that answers your question, Adam, but that's..
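Scott's idea of asking a model to find compliance breakdowns across aggregated transcripts can be sketched roughly as follows. This is a hedged illustration, not any vendor's product: the `llm_judge` function is a stand-in for a real large-context model call, stubbed here with keyword rules so the flow runs end to end, and the policy excerpts and calls are invented examples.

```python
# Invented policy excerpts and call transcripts, for illustration only.
POLICY_EXCERPTS = [
    "Agents must not press for payment after a consumer discloses a hardship.",
    "Agents must offer a transfer to a human on request.",
]

def llm_judge(transcript: str, policy: str) -> dict:
    """Stand-in for a large-context model call that judges one transcript
    against one policy excerpt. Stubbed with keyword rules so it runs;
    a real version would prompt a model and parse a structured verdict."""
    violated = (
        "hardship" in policy.lower()
        and "hospital" in transcript.lower()
        and transcript.lower().count("payment") >= 2
    )
    return {"policy": policy, "violation": violated}

def audit(transcripts: list) -> list:
    """Run every transcript past every policy excerpt; keep violations."""
    return [
        finding
        for t in transcripts
        for p in POLICY_EXCERPTS
        if (finding := llm_judge(t, p))["violation"]
    ]

calls = [
    "Consumer: I was in the hospital. Agent: Can you make a payment? Any payment?",
    "Consumer: Can I talk to a person? Agent: Transferring you now.",
]
flags = audit(calls)
print(len(flags))  # 1: the repeated payment ask after a hardship disclosure
```

The interesting part in practice is the judge prompt and verdict parsing, not this loop; the loop just shows how transcripts and policy excerpts pair up at scale.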

Adam Parks (19:19)

It does, a little bit. I'm just curious about that level of technology. I had a conversation last year, a whole podcast episode, about judge LLMs and the LLMs watching the LLMs. And as I started hearing about Mythos and the things that are concerning to the general economy, I said, well, isn't that also an opportunity for us to improve our penetration testing, to improve our policy and procedure gap analysis?

Now that we can process and understand more information. It's been my experience, as a reformed compliance professional, and I'm definitely going to use "reformed" there because it's been since 2019 that I was really engaged in day-to-day policy and procedure management or compliance management. But back then, and I still see a little bit of it from the periphery today, our compliance programs are built for the regulators, maybe not necessarily to improve our execution.

I mean, John, you've evaluated far more of these in the past, you know, eight years than I have, but do you think that's kind of the case of where we stand as an industry, or have we started to migrate more towards compliance for performance improvement as well?

John Bedard (20:36)

There is a large sense in which these programs are built to satisfy government regulators. That's my sense. There are a great many companies, however, that really do find production benefit and operational benefit in those same processes.

And so, you know, although the birth of these systems may have been the result of a regulatory catalyst, it has now really come to the attention of the industry that there are benefits beyond that as well. And I think all the folks with mature systems have experienced that.

Scott Hamilton (21:18)

I've seen some really cool, albeit fairly rare, examples, and I think this is down that thread, of using these types of tools to uncover complaints, which is a fascinating topic for anybody who wants to go deep on complaints.

Another one is around agent behavior. Go have lunch with people who manage incentive plans at the agent level and hear how agents can game the system. That's a whole nother use case to play with, Adam. The over-aggressive negotiation skills that are very tough to find are another use case.

So 100%, Adam, most of our controls are built around policy and procedure compliance, but not so much around how they do it from a customer experience standpoint, or working just shy of the guardrails a bit too often. So yeah.

John Bedard (22:20)

And in fairness, sometimes the rules, often the rules don't make for a nice warm and fuzzy customer experience anyway. So tell me the last part of your social and where you live before I say anything to you. So some of these kinds of rules don't lend themselves to comfortable customer experience. And so we do the best we can with that. But in fairness, sometimes it is a little bit difficult to navigate that.

Adam Parks (22:49)

This is where we get painted into a corner. And this is where, you know, 200 pages of disclosures on a letter is not beneficial to the consumer by any stretch of the imagination, but yet a requirement for so many of us. Which kind of leads me to another question here: how should AI be tested differently than human collectors, and should they be audited in the same way?

Do we need to treat that live agent and that AI voice agent the same way? Do they need to be equal in the eyes of the regulator, or should they be equal in the eyes of the operator? I'll kind of split that question between you guys.

John Bedard (23:33)

To me, the real auditing comes in before implementation. It's like, look, we need the compliance folks involved in the creation of the guardrails, right? We need to make sure that we've crossed every T and dotted every I. We expect humans to behave in certain ways, to respond to questions. We expect humans not to have to be told not to say the same thing over and over and over and over again.

And so we don't always remember that we have to make sure we build a guardrail so that AI doesn't get caught in a doom loop that creates a bad customer experience, whereas doom loops really aren't found in human behavior. So the examination and scrutiny on the front end, in building the technology, is probably one of the most significant differences between how we treat the technology and how we treat the humans.

I mean, we have training, and we expect, you know, the humans to consume the training. In another sense we'll call this training too, but it's the building of the guardrails that I think we really need to focus the attention on in terms of identifying differences.
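The doom-loop concern John raises lends itself to a simple detective control. A minimal sketch, assuming the bot's turns in a conversation are available as strings; the threshold and exact-match normalization are illustrative choices, not an industry standard (a production check might use fuzzy matching):

```python
from collections import Counter

def doom_loop(bot_turns: list, threshold: int = 3) -> bool:
    """Detective control: True when the bot repeats an identical
    (case- and whitespace-normalized) response `threshold` or more
    times in one conversation, a failure mode human agents rarely show."""
    counts = Counter(turn.strip().lower() for turn in bot_turns)
    return any(n >= threshold for n in counts.values())

convo = [
    "I can help with your account.",
    "Please provide your reference number.",
    "Please provide your reference number.",
    "Please provide your reference number.",
]
print(doom_loop(convo))  # True
```

A check like this can run post-call over transcripts or in real time as a circuit breaker that hands the conversation to a human.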

Adam Parks (24:45)

That's an interesting one, John, because I've actually had a conversation with a group very recently that was talking about the projects that they were doing actually training the bots and what kind of human-esque training is necessary in order to build those guardrails and to build that customer experience. 

Because even as you talked about where the regulators are focused and where the risk levels are, you continued to come back to the customer experience, which I think is both a compliance and a performance guardrail overall, like that is the underlying value that we provide as the debt collection industry. And I think it's the biggest opportunity, but go ahead, Scott. I want to hear about it from the operator perspective.

Scott Hamilton (25:26)

No, I think that's right. Defining the guardrails is not a compliance test. In my mind, that's the easy part. We've been doing that forever. We have scorecards. We define those out, and we can monitor the bot just like we would a human. I think the more important and bigger risk is the whole customer experience component.

Is it listening to the consumer? Is it making too many requests? Does it not respect the fact that they just said they were in the hospital yesterday because of a car accident, and it asked seven times for a payment? Does it tip into harassment, or unfair negotiation tactics? That, to me, is harder to guardrail, harder to detect, and probably where most of the risk is.

John Bedard (26:21)

Yeah.

Scott Hamilton (26:21)

But again, to John's point, set up those guardrails as best you can. The AI companies are not experts in this space either. So you've got to audit those new nuances, just like you would a human, but have a vendor or a partner with a pretty rapid feedback loop. All that said, if you get to the point where it's compliant, it's far more efficient and more effective over the long term, and everybody can be more effective in the short term.

Whether you burn out your file is really my question. If all those macro KPIs are green, this becomes sort of a...

It's already great. Like how much greater do we want it to be? Like, let's not get too wrapped around the axle on some of these subtleties. But you can't go in thinking that's going to happen on day one either.

Adam Parks (27:20)

You know, it's interesting the way that we look at this. Now, we keep talking about the guardrails and the things that we're doing to prepare to release a piece of technology into the wild.

Once it is released and they're operating these AI tools at scale, how do we manage that compliance risk, once we've done all the preparation that we can do? It's like raising a child. You do all the things you can do to get them ready for college, and then you send them off to college, and the level of control that you have is significantly less. Maybe it's a bad analogy, but I'm going to go with it for now.

How do we then monitor these things at scale? How do we manage that at scale? And to which standard are we evaluating it?

Scott Hamilton (28:13)

My reaction is you manage it the same way you're managing your operations team today. You have managers walking the floor, listening for inappropriate behavior, and you have a first line, second line set of controls that are monitoring pretty much the compliance and procedure elements. 

But while they might use some tools to do it, they're generally doing call listening and checking code-entry alignment with what happened on the call. My initial reaction is you do it the same way. Just don't let it get out of your sight because it's automated. Instead of having one manager to 20 associates, you're going to have two managers doing the same behavior on a higher percentage of bot conversations, at least at the beginning.

John Bedard (29:01)

And the time for asking that question is not after you've released the technology into the wild, because part and parcel of building that technology includes the preventive and the detective controls that you're gonna have in place when that technology gets launched. And so.

We're gonna have answers to those questions long before we actually start using the technology. To Scott's point, we're gonna have all these detective controls in place: how are we gonna monitor what the bot is doing on the chat portion of our website? How are we gonna monitor the emails the system sends to consumers? How are we going to monitor all that?

And those things are going to be in place, and we're going to have ways of aggregating that data and analyzing that data very, very quickly, if not in real time, so that we can see risk before it materializes.
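The kind of aggregation John describes, rolling up conversation outcomes fast enough to see risk before it materializes, might look something like this in miniature. The field names, disposition labels, and the 5% threshold are all assumptions for illustration, not a real standard:

```python
from collections import defaultdict

def flag_hours(events: list, max_complaint_rate: float = 0.05) -> list:
    """Roll up per-hour bot-conversation outcomes and return the hours
    whose complaint rate exceeds the threshold, so risk surfaces quickly."""
    by_hour = defaultdict(lambda: {"total": 0, "complaints": 0})
    for e in events:
        bucket = by_hour[e["hour"]]
        bucket["total"] += 1
        if e["disposition"] == "complaint":
            bucket["complaints"] += 1
    return [
        hour
        for hour, b in sorted(by_hour.items())
        if b["complaints"] / b["total"] > max_complaint_rate
    ]

events = (
    [{"hour": 9, "disposition": "resolved"}] * 95
    + [{"hour": 9, "disposition": "complaint"}] * 5    # 5%: at, not over, threshold
    + [{"hour": 10, "disposition": "resolved"}] * 90
    + [{"hour": 10, "disposition": "complaint"}] * 10  # 10%: flagged
)
print(flag_hours(events))  # [10]
```

The same shape works per channel (chat, email, voice) by keying the buckets on channel as well as hour.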

Adam Parks (30:05)

I agree with that statement, but eventually it goes into the wild, and we now have to monitor it from a distance, at a larger scale. And I think that's the use of the judge LLMs, and even using the same call monitoring technology and holding it to the same human standard. To Scott's point, there are more things we need to look for, like the doom loops, things you don't see in human behavior that we need to monitor for.

But I think that minimum standard is the human standard, and then we have to look at it through an empathetic eye in some way, shape or form, whether that's through the use of judge LLMs or stacking those different models for different purposes. And it reminds me of Casino, right? The eye in the sky is watching us all. Everybody's watching everybody else, and then in the end there's kind of the overwatch. Scott, I know you understand...

Scott Hamilton (31:02)

Yeah, I'm tracking, and we need a bingo card. I'm tracking how many movie references, and I'm on B-I-N, but I haven't gotten to G-O yet. So I'll let you know. Yeah.

Adam Parks (31:11)

We're getting close, we're getting close. I'm gonna work through it, I got a few more in here.

John Bedard (31:16)

One of the big risks, call it platform exit. We don't have a good way for consumers to exit the platform, and that's what creates angry consumers. And that creates a bad customer experience, and that's what gets regulators upset. When we don't have good platform exit, the risk can actually increase exponentially. 

And so when you talk about guardrails, when you talk about this monitoring, one of the very first things we look at is how consumers get off the platform, get out of this, because in our experience and observation, it makes for a better customer experience when consumers can get out.
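A platform-exit guardrail of the kind John describes can be approximated, at its simplest, as exit-intent detection that routes the consumer to a live agent instead of looping. The phrase list below is an illustrative stand-in for a real intent model:

```python
# Illustrative phrase list, not a production intent model.
EXIT_PHRASES = ("operator", "real person", "human", "representative", "supervisor")

def wants_exit(consumer_turn: str) -> bool:
    """Return True when the consumer is asking to leave the bot,
    so the platform can hand off to a live agent instead of looping."""
    text = consumer_turn.lower()
    return any(phrase in text for phrase in EXIT_PHRASES)

print(wants_exit("Just let me talk to a human already"))  # True
print(wants_exit("What is my balance?"))                  # False
```

In practice this would be one signal among several (repeated requests, rising frustration, silence), but even a crude check beats having no exit path at all.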

Adam Parks (32:00)

It can't be like the airlines five years ago, where we had to scream operator, operator, operator six times to have it understand that it was time to talk to a human. And I think our industry needs to be significantly more sensitive to it than a standard customer-service-style operation.

John Bedard (32:03)

Great.

The beauty of it, you know, this is sort of the world according to John here, the beauty of it is I think very, very soon we as the consuming public are actually going to want to talk to the bots instead of humans, because we're going to get a better experience with the bots. Because hopefully smart people are designing the guardrails in ways that are going to get us what we need better, faster, cheaper than talking to a human or trying to get connected to a human.

Scott Hamilton (32:44)

It was interesting, John. What you said makes perfect sense, but there's a slight little brain cell that went the wrong direction. When you were saying, how do we exit this thing, my mind went to: the deeper you get with one vendor, if you ever need to exit that relationship, what's your back-out plan?

If you no longer have those 80 agents you had three months ago, where are you going to get them back? Or how are you going to stand up another solution, with the same learning curve, fairly quickly and easily?

My guess is we'll land in a spot where people will have two or three of these in production challenging each other. And just like you have an agency network, you'll have a bot network, with backups, making each other better. But it's interesting. There's risk in not doing a great job at both of...

John Bedard (33:43)

Yeah, you've just uncovered a new business opportunity in this industry, which is some way of standardizing the experience across all kinds of interactions so that we can take all of that and move it to somewhere else.

Adam Parks (34:04)

We just saw that between OpenAI and Anthropic in the not too distant past here, I want to say over the last couple of months where Anthropic actually built a tool set to be able to export whatever knowledge ChatGPT had of you on the OpenAI platform and be able to migrate that to another platform. So that was the first time I had ever really started thinking about that challenge. 

And we saw that happen at scale with the Department of Justice situation and all of it, and just kind of the trust level of OpenAI dropping rather quickly in the last 60 days. But it's been kind of interesting to think about how you're going to be able to move that out. Now, I don't think that organizations are necessarily getting rid of good collectors right now, because good collectors are hard to find and very expensive to train.

But if you were to drop a piece of technology, what is it going to take for you to get up to that same level of necessary scale? And where are you going to hire those folks? Because 88% of companies are having trouble hiring.

One percent are having trouble retaining the people they are hiring. So where are you going to go? BPO services? Offshore, nearshore? Where are you going to find the physical people that are ready to work?

Scott Hamilton (35:15)

Another business opportunity, just because we're brainstorming new business opportunities: you're going to have two or three of these, or multiple, and most buyers' tech infrastructure is different. Nobody has the same tech stack. And with 50 of these companies, at least, the integration layer and the level of effort and risk is pretty massive.

And there's no standardized way for these things to plug in. Sorry, I'll get to another one later. But yeah, it's a big headwind to deploying multiple of these solutions fairly quickly and then be able to flip them out to optimize over time when the lift is what it is today.

John Bedard (36:07)

We need to create that standardization, Scott. After this call, we're going to create that business. There you go.

Adam Parks (36:15)

I got another one for you. How about bot-to-bot communications? Because the debt settlement industry and the consumers are also deploying bots, and at a rapid pace. And I'm sure that the consumer, I'm going to throw air quotes and call them the consumer "advocate" attorneys, are also going to start using bots with the intention of trying to trip up your live collector or your AI agent, to get them to say something out of line so that they can use that against you. 

But how are you going to start to identify them, especially as these bots get better and better? We're working on the same technology on our side to provide a better customer experience, but at the same time, the attackers are working on that same technology for a much more nefarious purpose.

Scott Hamilton (37:00)

That's an after-hours session we'll invest in.

Adam Parks (37:02)

I started having these conversations at RMAI because bot-to-bot conversation, I think, is a real risk. But as we talk about the AI voice bots and we talk about the human technology, do you think that the voice AI bots are more predictable than the humans, or is it just different? Are we comparing apples and Mack trucks?

Scott Hamilton (37:06)

Yeah. Predictable in all regards?

Adam Parks (37:29)

In regards to creating that customer experience and the responses that are going to come, I have yet to come across anybody in this industry who's actually had one of these voice AI bots hallucinate. Now, hallucination is a reality. We've all seen the ChatGPT lawyer situations, and we've seen the rise in pro se litigants from a consumer perspective using ChatGPT or other models to write legal responses. 

But do you think that we're at an enterprise level where we're actually able to create a more predictable experience for these consumers? Or do you feel like it's just so different from the human interactions that we can't even compare these two things?

Scott Hamilton (38:15)

I think you definitely can. You can measure customer sat, you can measure sentiment; all the KPIs still apply. I think by definition, the automated solutions yield a more consistent experience. Now, is it a great experience? See the first 40 minutes of this conversation about building the right controls. But I'm of the camp that the tools in general don't hallucinate. They are compliant. 

They'll do what you tell them to do. They are more efficient. They also optimize against the incentive plan you give them. And maybe that's another way to think about it: your agents do that too, but they have a hard time doing it to an extreme. They have limited time, and the dialer limits them. But bots can. The one debate I've heard in this space is that a bot can max out seven-and-seven on every account if you let it. And it will collect far more than a human, because it just calls the heck out of everybody and texts everybody. So you have to have another set of controls or guardrails that imagine how the incentive plan you give it can be exceeded. 

Because that was my comment earlier: it can be more efficient, it can be highly compliant, it won't hallucinate. But is it more effective over time? Not just really efficient at hammering your portfolio harder than the consumers would like.
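The "seven-and-seven" Scott mentions refers to Regulation F's call-frequency presumption: no more than seven call attempts regarding a particular debt within seven consecutive days. As a minimal sketch of the kind of guardrail he's describing (the function name and the simplified counting logic are illustrative assumptions, not any vendor's implementation, and real systems must also handle the seven-day cooldown after a conversation and per-debt tracking):

```python
from datetime import datetime, timedelta

# Simplified 7-in-7 guardrail: allow a new call attempt only if fewer
# than seven attempts have been made in the trailing seven-day window.
SEVEN_DAYS = timedelta(days=7)
MAX_ATTEMPTS = 7

def may_place_call(prior_attempts: list[datetime], now: datetime) -> bool:
    """Return True if another attempt stays under the 7-in-7 cap."""
    window_start = now - SEVEN_DAYS
    recent = [t for t in prior_attempts if t > window_start]
    return len(recent) < MAX_ATTEMPTS

now = datetime(2025, 6, 15, 10, 0)
# Six attempts over the past six days: one more is still allowed.
six = [now - timedelta(days=d) for d in range(1, 7)]
print(may_place_call(six, now))                               # True
# A seventh recent attempt hits the cap: no further calls allowed.
print(may_place_call(six + [now - timedelta(hours=2)], now))  # False
```

This is exactly the kind of hard stop that keeps an incentive-optimizing bot from "calling the heck out of everybody": the dialer naturally limits a human, but an automated agent needs the limit made explicit.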

Adam Parks (40:02)

The sprint versus the marathon question. Right.

John Bedard (40:06)

And whether or not it's more predictable than the human. I mean, it really depends on the human, right? Because over the last five, 10 years, our humans have gotten pretty darn good, you know? 

And so, you know, especially when it comes to compliance and things like that. And so I don't think it's ever gonna be less predictable than humans, but I think they're probably gonna be on par with most of our humans today in terms of predictability. The question is whether or not

Adam Parks (40:12)

True statement.

John Bedard (40:33)

you know, we can really teach this technology to have the secret sauce that our best humans have. The answer is probably yes.

Adam Parks (40:45)

So going into our final 15 minutes here, I wanted to kind of bring this conversation full circle a little bit because the AI journey and the reason that we started this series was to be able to talk about going beyond the demo itself because we've all seen the fancy demos at the conference and somebody pulls out a phone and they play a recording for you, whatever the case may be. 

I know both of you have viewed a whole lot of different technology pieces, and I'm not asking anyone to advocate for any particular vendor. But I think it's interesting to look at it from both of your compliance and operational perspectives: how do you separate a strong demo from a real solution? What does that process look like, and how do we go beyond just listening to the demo we heard at the conference and identify the real solutions that can power our businesses going forward?

John Bedard (41:40)

I love that. What I do is sort of dispense with the demos. Look, I've seen your stuff. I love it. You don't need to do me a demo. Give me the phone number. I want the phone number. I'm going to talk to it. Or give me the email address, so I can correspond with your tool. Or give me the website where I can chat with it. To me, that's where the rubber meets the road.

I know it works perfectly all the time and it's great because that's what our demos are designed to do. But give me the phone number. I want to talk to it.

And that, I think, really separates the wheat from the chaff and allows you to make a judgment about really operationalizing the technology. Can it really do what we need it to do? Give that phone number, or the email address, or the website for the chat, whatever the bot is, to your compliance folks and say, look, have at it for the next couple of days and see how you feel about it after you've interacted with it. To me, that's the best litmus test.

Adam Parks (42:41)

I love your perspective, John. Last year, I challenged all of the voice AI vendors that were approaching me to let me interview their bot on a podcast. I said, whoever is willing to do it, I will do a podcast and we'll lead into it. I got one taker, and I interviewed the bot. And because I wasn't at RMAI in 2025, they actually had that

John Bedard (42:54)

You've got zero takers. Okay, all right.

Adam Parks (43:08)

podcast playing in their booth, because they were actually able to put me on with it and let me play with it a little bit. And I was really impressed. But that was kind of my approach as well: all right, tell you what, if it's so great, let me interview it and let's see how it goes. "Well, it's not built for interviews." Well, I'll be a consumer. 

Like, I know how to be a consumer. I've been at this business for 20 years. It was interesting to see how many organizations were like, no, no, no, no, no. Yeah, we're not. We can't do that.

John Bedard (43:24)

Great. Yep. Yeah.

Adam Parks (43:34)

How about you, Scott? You went through this whole project of evaluating pretty much every piece of AI technology in the space last year. How do you go beyond the demo?

Scott Hamilton (43:43)

Yeah, it was interesting. You're right. We all came back from RMAI last year, and the community that I host all saw those same demos. We were all pretty impressed, but they didn't know how to RFP them. So we crowdsourced an RFP. We issued it to 22 vendors, 13 filled it out, we judged them, and sort of four separated from the pack.

So that was one way. John's way was probably next. Where that group has at least gotten to, because there are so many vendors and they've had six, nine, twelve months to do case studies and pilots, is that I'm now focused on: show me the case study that passes three tests.

So far, there's only zero, one, maybe two that pass this test. It's a forward-looking test. First: do you have production clients that look like me? Boom, two-thirds fall over on that test. Second: can you prove that it's compliant and more efficient than a human at some reasonable scale?

Most of those that pass the first pass the second. But then, third: show me how it's more effective over time. That's the new bar. If you're more efficient, more effective, and compliant, then to me, the demo proves itself out.

The economics? I think the whole industry is trying to figure out how to price this stuff, so that's to be figured out later. It'll right-size based on value. But that's how most are wrestling with it. The last one to throw in is: how do I integrate with you, and what's that level of lift? 

Because some already have integrations with your CRM and some don't. And telephony and payments and portal integration and text integration, like that might be far less for some and far more for others. So that might be another consideration filter.

Adam Parks (46:07)

Very interesting approach and watching you crowd source that RFP and that whole process was really interesting for me. I learned a lot through participation and just kind of watching and talking with you throughout that process and how do you actively do that. And now I'm starting to see this, let's call it shift in the technology from an operational perspective. 

And what I mean by that is: three years ago, in 2023, we had like 64% or 67% of companies saying they were never going to touch artificial intelligence. Last year it was 24%, and in 2025 it was like 7% of companies that said they weren't going to use AI. And when we dissected that information even further for the report, it was almost all companies under 20 people, very focused on medical, but there was a very small subset that just didn't have an interest in it. So now I start thinking about it like text messaging and email. Those took a lot longer for us to adopt, and for many regulatory reasons. But now pretty much every company has the train tracks laid. They can send text messages and they can send emails. The differentiator is in how they're actively deploying it and how they're orchestrating their communications across those channels. Do you think, as we hit this saturation level with artificial intelligence, it's going to be less about

"I have AI at my agency" and more about the orchestration between the channels and the selection of the specific use cases where they're concentrating their time and energy to perfect?

Scott Hamilton (47:48)

That's everything in my book. The bot that calls the human, or the text, or whatever, that's the arm and the leg. The heart, soul, and brain is the data aggregation, analytics, and orchestration layer. And that's the next big question: are you going to build that yourself?

John Bedard (47:51)

Yeah.

Scott Hamilton (48:15)

Pause before you say yes; make sure you know what you're getting into. And pause before you say no, because you've got to know what you're getting into there too. The tech is there. The tech is wildly effective, and it's only going to get better. It's the data and the guardrails that you give it, or allow it to build on itself, where all the value is going to come from.

Adam Parks (48:41)

I used to look at it as buy versus build, but now I think it's buy versus build versus rent. And I think we're seeing a bifurcation of the industry down those three channels as we look at this technology. But go ahead, John.

John Bedard (48:54)

I was going to say the same thing. We all have this tool; the differentiator can be how we use it, how each of us uses it. You go to Home Depot, you buy a shovel. You can use that shovel to dig holes, or to hammer a nail, or maybe to lay a brick, or to move water from one place to another. But its highest and best use is digging holes. So how are we going to use this tool? I think a lot of people are going to experience buying that shovel and trying to lay a brick with it.

That's not effective, right? We gotta do it a different way. And maybe they're gonna find others out there, experts out there that say, hey, we're experts at this shovel, why don't you come to us and we'll sort of, we'll tell you where to dig and what to dig for and how deep to dig. We'll do all that for you. I mean, that's what's gonna happen.

Adam Parks (49:38)

And the sifter is to find the value, the gold, within that dirt you're digging. Because I think the action itself is one piece of it, and then the orchestration to new value becomes another piece. And I think we can all agree that consumers are becoming more comfortable with AI technology, similar to the way that, over the last 10 years, consumers have become significantly more comfortable with subscription services.

John Bedard (49:42)

That's right. That's right.

Adam Parks (50:04)

If you told me in 2005 that everything in my life would be a subscription service, I would have laughed at you. And now, if you look at a credit card statement these days, it's 30, 40% subscription-based models for just about everything. Even cars now. I heard Toyota is rolling out a subscription service for its cruise control. Maps in GM vehicles, just to be able to run navigation on your screen, will have their own $14.99 subscription service.

John Bedard (50:24)

Good grief.

Adam Parks (50:34)

And so the consumers have become more, I guess, comfortable with that process. And I think the same thing is going to happen from an AI communication standpoint. But then we also have the opportunities to empower the orchestration using these AI tools. And I think that's where the differentiation is going to be over the coming two years.

Scott Hamilton (50:55)

There's a last comment, at least for me. There's another interesting decision that creditors will need to make that will have an impact on the downstream industry. Most creditors have far more data on their customers than any agency or debt buyer does. They have the most to gain, I think, by leveraging this technology. If, and it's a huge if, they master it, everybody down the line is going to get a different subset of what they're getting today. 

But they are far bigger, more bureaucratic, and slower than everybody down the line. So there's this window of opportunity, predominantly in the agency space. There's not a first-mover but a first-and-second-mover advantage: if you can figure this out, not only can you better position yourself, but you can begin to offer that service to your creditor clients before they're ready, willing, and able to do it themselves. 

And they may choose not to, because they can build and test through you instead of building and testing themselves. And they've got underwriting and fraud projects that beat this out every day. So it's an interesting debate that's building on the creditor side of this arms race. But eventually people will get this. It will be more efficient and effective. They will figure out how to govern it. They will have multiple tools at their disposal to swap in and out. The question is who, and how fast?

Adam Parks (52:38)

I was reading a McKinsey report the other day that talked about how what it calls high-performing AI organizations are 400% more likely to have revamped their underlying processes and workflows in order to drive that value. As a reformed banker, do you think that slow movement on the creditor side is them reworking those workflows? Or is that where the real opportunity lies for agencies, the ones that have built AI-focused workflows from the ground up? Because bolting AI onto an old traditional process, I don't think, is the future. 

It might give you lift today, but I don't think it's a long-term competitive advantage unless we're reworking those workflows. So, having visibility on both sides of the equation, what do you think comes next?

Scott Hamilton (53:35)

Yeah, 100%. There are a few, very few, like fewer than five of the largest banks, that are doing what you would do if it was your own money in your garage. You'd build a parallel business: a startup, ground-up, destroy-my-old-one.com sort of thing.

A couple of agencies are thinking about doing the same thing, but you're exactly right. It's the latter. Virtually everybody is bolting something onto their existing infrastructure. At a minimum, to learn, per the topic we had at the very beginning of the conversation, that's all goodness. The big debate comes on domino two or domino three. 

Before you expand that bolt-on appendage strategy over time, you should be confronted with: where's this thing going? And is my investment in the next six months better pointed toward more integrations with a third and fourth vendor, or toward retooling some of the underlying architecture?

Adam Parks (54:45)

Interesting point of view. And as we come into our final moments here, gentlemen, any final thoughts that we didn't get into in our discussion today, things you had in the back of your mind as we were preparing?

I could go down the rabbit hole with you guys all day long and start a whole other conversation, but I'm not going to get to it in two minutes.

John Bedard (55:03)

Keep going.

Scott Hamilton (55:06)

Yeah.

John Bedard (55:08)

Yeah, I think I would just repeat this notion that you don't have to eat the whole elephant in one bite, right? Baby steps. It's coming whether we like it or not. We are going to be using it in the future whether we like it or not. So let's learn to like it and let's start taking tiny spoonfuls of it, because it really is going to make our businesses and our lives better.

Scott Hamilton (55:32)

I would agree with that, with one added caveat: take two or three spoonfuls, then plan to stop, pause, back way up, and answer the strategic question once you get your brain around what this thing is and that it's going to do good things for you. 

Because if you get too deep with four vendors on seven platforms, you will have tech debt that becomes yet another headwind when you have to back up and retool at the base level. So yes, learn; yes, pilot. But then step back and either invest in a broader set of use cases, where you give, say, a placement file to a vendor and give them latitude to do everything, or really look at your underlying tech infrastructure, because you're going to build more of the heart, soul, and brain, the data and orchestration layer, yourself. Make those strategic decisions after you've had a few spoonfuls of learning.

Adam Parks (56:42)

Going backwards is hard with artificial intelligence. So either take the time to set it up correctly, or be prepared to blow up your test, or bring somebody in who knows how to help you set it up from day one: get organized, keep the data safe, and test on things that don't matter. 

I've spent the last two days in OpenClaw trying to build different pieces of technology, with the full expectation that I'm going to delete that entire instance and start over again, using the first few days as a learning experience so I can better understand how to set up my infrastructure for the future and build something that will be commercially usable.

So gentlemen, thank you so much for joining me today. This was an absolutely fantastic discussion. Please stick around for a minute after we end the live stream, just so I can make sure I have all the videos uploaded. 

For those of you that are watching today, we really appreciate your time and attention. We'll be publishing the replay next Friday to YouTube and again here on LinkedIn, or you can share it with your friends and colleagues directly here on LinkedIn, because I think there were a lot of really great nuggets of information in this conversation, coming from these gentlemen's very interesting perspectives. So again, thank you, everybody, for your participation. And thank you, guys. I can't thank you enough for participating.

Scott Hamilton (58:04)

Nice job. 

John Bedard (58:04)

Thank you. 

Adam Parks (58:06)

Thanks everyone, we'll see you again soon. Bye.

Why AI in Debt Collection Matters Right Now

AI in debt collection is forcing the industry to confront a hard truth: a written policy is not the same thing as operational reality. 

That’s really the heart of this conversation, and it’s one of the key reasons this episode brings together John Bedard of Bedard Law Group and Scott Hamilton of ARM Tech Advisors.

Across collection operations, compliance teams, data providers, and technology vendors, a consistent pattern keeps emerging. 

Everybody says they want innovation. 

Everybody says they want better consumer experiences. 

Everybody says they want to stay compliant. 

But once you get past the polished demo and start looking at what’s actually happening inside the workflow, the real question becomes pretty obvious: do your people, processes, and platforms actually behave the way your policies say they should?

That’s where this episode becomes especially relevant for industry leaders.

John brings a legal and compliance perspective that’s grounded in what happens when things go sideways. Scott brings the operator and transformation lens—how teams pilot, evaluate, tune, and scale these systems in the real world. Together, those perspectives keep the conversation focused on what collections leaders can actually do with this information.

This isn’t a hype conversation about whether AI is coming. It’s already here. The better question is whether you’re adopting it in a way that improves collections compliance, strengthens customer experience, and creates long-term operational value.

If you work in debt collection, financial services, receivables management, or recovery strategy, this episode matters because it goes beyond theory. We talk about where compliance gaps are actually forming, why “we don’t do that” is one of the riskiest phrases in business, how to test AI voice bots the right way, and what separates a slick demo from a real solution.

It’s the kind of conversation the market needs more of.

AI in Debt Collection Compliance Starts With the Real Gap

“The gap I’m seeing now is that the programmers creating these systems are disconnected from the operators who understand what the goal should be.”

That quote from John gets right to the operational problem.

A lot of collections teams still think compliance breakdowns begin when a bot says the wrong thing. Sometimes that’s true. But more often, the real issue starts earlier—during design, configuration, handoff, and oversight. If the builder, the operator, and the compliance team are not aligned, the system may technically function while still creating risk.

This is where a lot of leadership teams get tripped up:

  • They assume policy documentation equals operational control.
  • They treat AI rollout as a tech project instead of a workflow project.
  • They underestimate the importance of feedback loops.
  • They forget that customer experience failures often become compliance failures later.
  • They don’t always define who owns the guardrails once a pilot goes live.

That matters because AI in debt collection doesn’t just expose communication issues. It exposes organizational misalignment.

If your policy says one thing and your workflow does another, AI won’t hide that gap—it will magnify it.

AI Voice Bots and Digital Collections Transformation Need Baby Steps

“Baby steps. Let's take baby steps and let's try it on the inbound side first.”

This point cuts through a lot of the noise.

There’s a real temptation in digital collections transformation to jump straight to the biggest, flashiest use case. Outbound voice. Full workflow orchestration. End-to-end automation. But in practice, the smarter path is usually more controlled. Start with inbound. Start after hours. Start with a lower-risk segment. Learn what breaks, learn what works, and build trust inside the organization.

Here’s how that translates into practical strategy:

  • Pilot where risk is measurable.
  • Keep the scope narrow enough to observe behavior clearly.
  • Include compliance in design, not just review.
  • Treat tuning and testing as part of implementation, not post-launch cleanup.
  • Use the pilot to build internal AI trust.

That’s not a small point. In a lot of organizations, the real bottleneck isn't technology. It’s internal confidence. When teams can see the workflow, hear the conversations, evaluate the edge cases, and monitor performance, adoption becomes much more rational.

How to Evaluate AI in Debt Collection Without Falling for the Demo

“Give me the phone number. I want the phone number. I'm going to talk to it.”

This was one of the most practical lines in the episode.

The industry has seen enough demos to know that a polished presentation is not the same thing as a production-ready solution. The real test is interaction. Can your team actually engage with the bot? Can compliance push on it? Can operations challenge it? Can leadership hear how it handles real-world friction?

A practical evaluation checklist coming out of this conversation looks like this:

  • Ask for a real environment, not just a demo 
  • Test inbound before pushing into higher-risk channels 
  • Review compliance logic and escalation paths 
  • Stress-test customer exit options and human handoff 
  • Ask for proof of efficiency, compliance, and long-term effectiveness 
  • Evaluate integration lift across CRM, telephony, payments, and messaging 
  • Plan for vendor redundancy and platform exit risk 
  • Look for orchestration capability, not just a single-channel feature

Industry Trends: AI in Debt Collection

One of the most important industry shifts is the move from “Do we have AI?” to “How well are we orchestrating AI across the workflow?” That’s where Scott made an especially important point: the visible bot is only part of the value. The deeper advantage is in the data aggregation, analytics, and orchestration layer behind it.

That’s a big deal for collections leaders because it changes the competitive question. Over the next phase of adoption, the winners probably won’t be the organizations with the loudest AI messaging. They’ll be the ones that build better guardrails, better testing discipline, better workflow design, and better channel orchestration.

Key Moments from This Episode

00:00 – Introduction to John Bedard and Scott Hamilton
07:40 – Where compliance gaps form (policy vs operations disconnect)
16:45 – Using AI to detect compliance gaps and CX risks
26:30 – Customer experience risks and guardrail challenges
32:40 – Platform exit risks and consumer experience
42:40 – Demo vs real solution: how to evaluate vendors
57:10 – Final advice: baby steps, strategy, and avoiding tech debt

AI in Debt Collection: Actionable Compliance & Deployment Tips 

  • Start with inbound or lower-risk use cases.
  • Put compliance in the room before deployment, not after.
  • Define human handoff and platform exit clearly.
  • Audit for customer experience, not just script adherence.
  • Ask vendors for real testing access.
  • Measure effectiveness over time, not just launch metrics.
  • Plan for integration complexity early.
  • Build strategy after learning, not before it.

FAQs on AI in Debt Collection

Q1: What is AI in debt collection?
A: AI in debt collection refers to using machine learning, automation, voice bots, chatbots, and decisioning tools to support recovery workflows, consumer communications, prioritization, and oversight. The real value comes when it improves both consistency and strategy.

Q2: Does AI reduce compliance risk in collections?
A: It can, but only when it is designed, tested, and monitored correctly. AI can reduce variability and improve documentation, but weak guardrails or poor oversight can create new risks just as quickly.

Q3: How should companies test AI voice bots?
A: Start small, use controlled environments, and let compliance and operations pressure-test the experience. A strong demo is not enough. Real interaction tells you much more than a scripted showcase ever will.

Q4: What should leaders evaluate in an AI partner?
A: Look at production results, compliance design, integration lift, customer experience handling, and long-term effectiveness. Also, ask how easy it is to exit, replace, or challenge the solution later.

Ready to Strengthen Your AI in Debt Collection Strategy?

If this is a topic your team is actively wrestling with, this episode is worth watching through three lenses: risk, workflow, and leadership. 

Watch the full conversation and explore more insights at ReceivablesInfo.com, and follow the discussion on the Receivables Info YouTube channel.

The takeaway is simple: the organizations that win with AI in debt collection won’t be the ones that move the fastest. They’ll be the ones that learn the fastest, test the smartest, and build the strongest bridge between policy and practice.

About Company

Bedard Law Group, P.C.

Bedard Law Group, P.C. is a full-service law firm dedicated to the credit and collections industry. Established in 2009, the firm provides strategic legal guidance, compliance support, and litigation defense, delivering practical solutions and strong value to its clients. Services include defense litigation, compliance consulting, collection letter review, on-site audits, nationwide litigation management, corporate counsel, policy development, and BLG Insight speech analytics.

ARM Tech Advisors

ARM Tech Advisors is a strategic advisory firm focused on the collections and financial services industry. The firm helps organizations accelerate transformation through tailored strategies, process redesign, vendor selection, and operational optimization. Leveraging industry expertise and insights gathered from leading firms, ARM Tech Advisors delivers practical, data-driven solutions that improve performance while minimizing execution risk.

About Guest

John Bedard

John Bedard is an AV-rated attorney and nationally recognized expert in the FDCPA and FCRA. He advises organizations across the credit and collections industry and serves as counsel to leading trade associations, including the Georgia Collectors Association. A former ACA International Board member, he is recognized as one of the Top 50 Most Influential People in collections by Collection Advisor magazine.

Scott Hamilton

Scott Hamilton brings over 30 years of experience in collections, leading policy, process, and technology transformations across banks and organizations of all sizes. He takes a highly consultative approach, is continuously learning, and is passionate about sharing insights that help drive meaningful, practical change.