In this must-watch episode of The AI Hub, host Adam Parks sits down with compliance powerhouse Sara Woggerman, President of ARM Compliance Business Solutions, to break down what every debt collection professional needs to know about writing and enforcing AI policies.

Adam Parks (00:01.246)

Hello everybody, Adam Parks here with an episode of the AI Hub. And this is quickly becoming my favorite podcast, because I'm really starting to get into artificial intelligence and I'm learning just so much from our guests. So today I've got Sara Woggerman with ARM Compliance Business Solutions here to talk to us about actually building your policies and procedures around

the use of artificial intelligence within your organization. But before we jump into that, let’s hear from our sponsor, Latitude by Genesis.

Adam Parks (00:37.33)

All right, so Sara, very excited to have you here chatting with me today. I know that you are a staple across the industry and a frequent speaker at conferences, but for anyone who has not been as lucky as me to get to become your friend over the years, could you tell everyone a little bit about yourself and how you got to the seat that you're in today?

Sara Woggerman (00:56.056)

Of course, thanks for having me. I'm excited to be here today, Adam. This has been one of my favorite talking points recently as well, so I'm just delighted to dive right in with you. I am the president and owner of ARM Compliance Business Solutions. We are a woman-run consultancy that primarily focuses on the compliance aspects of the ARM industry, and we work with creditors as well.

Ultimately, we want to create a partnership with our clients that advances their business, which is why I talk about things like AI, right? So maybe this will help lead us into our discussion. But I become very alarmed when I hear things like, we’re saying no AI at all, right? Like, why would you do that? So yes, we need to think about this through a compliance lens. But do not stop yourself from innovating.

We are in a time of drastically changing technology, and if you wait too long, you're gonna miss the boat.

Adam Parks (02:04.754)

It’s kind of like not using the internet in the 90s.

Sara Woggerman (02:07.486)

Right. Yeah, but it’s it’s it’s advancing so much faster than like the internet boom, right? I was just reading an article about that the other day, you know, you look at the 90s and you look at what’s happening now. It’s happening at such a faster pace. You don’t have time to catch up. I mean, you really don’t.

Adam Parks (02:25.576)

I was guilty of being one of those people who said, okay, we're not using AI. And I put out a memo to the entire organization and said, hey, if you're writing content using AI and submitting it, like that's not your work. And I came to a realization, and I wanna say it was between DCS and NCBA in New Orleans last year, just in different conversations that I was having, and people were

Sara Woggerman (02:29.55)

Mm-hmm. Mm-hmm.

Adam Parks (02:47.7)

kind of expecting me to be on the cutting edge of AI, and I really didn't have a lot of great answers other than some of the playing around I had done offline using Llama and building my own models, really just kind of nerding out for the fun of it. And that's when I realized that, you know, maybe it's okay, because at first it felt like it was more of a problem, and then it became less of a…

less of a threat that we're using these tool sets, and more of an expectation that we were using those tool sets. And it kind of changed my entire frame of mind to where I dug in and said, okay. Come November, December, I spent about 200 hours prompt engineering, trying to learn and understand and figure out how to produce my podcasts, for example, using artificial intelligence, and to create the short videos and to do all the things that I needed to do as an organization. But now I'm at the point of kind of testing.

Sara Woggerman (03:31.746)

Mm-hmm.

Adam Parks (03:40.145)

And the process that I took, Sara, was, I don't know if it's unique or different, but I took 10 people with 10 different use cases across the organization. And those 10 different use cases became my AI team for us to, let's say, pioneer each one of those individual use cases. But now we're getting to the point of starting to be ready to roll some of this out, and I've got to get some documentation behind it, because even as a marketing and news organization, I

rely very heavily on policies and procedures. And maybe that's because I'm a reformed compliance professional myself, and maybe that's my safety blanket, having those policies and procedures in place. But with the rapid adoption of this technology, how are we going to get documentation in place for this to make sense?

Sara Woggerman (04:10.734)

Mm-hmm.

Sara Woggerman (04:29.25)

You know, that has been a challenge, and I've talked about this with a lot of my different peer groups that I'm a part of, because they're like, how do you put the guardrails in, right? And what are those guardrails? What if it changes? And so I do think that your policies need to be nimble, right? They need to be somewhat flexible, but give people really clear direction about, you know,

when is it acceptable to use these AI tools and when is it not, right? Similar to what you just said, I mean, I had similar concerns. People were like, I don't know that I believe ChatGPT, right? Like, how do I know? And then you hear these: it's all lies, they're just trying to come and take your job. No, no, no. It should enhance our jobs, right? We should be better, more effective

Adam Parks (05:13.544)

Don’t believe it’s lies. Don’t believe it’s lies.

Sara Woggerman (05:24.862)

as a result of this technology. And so I also had somewhat of a similar journey, right? Where I was like, I don't know if I trust it. And then you just kind of dip your toe in, and you're like, whoa, it learned so quickly. And when you learn how to prompt these tools in the right way, you are using it as

an addition to your expertise, or to clarify something really important that you're trying to say. For me, you know, communication is so key to my organization, right? What is very clear to me might not be clear to the client that I am producing something for. So I will often say, all right, just look at this paragraph. How could I word it better? Or is this clear enough?

And it gives me really constructive feedback. Every so often I'm like, no, but then I'm like, okay, this is what I need to do to make this more specific to this particular client. So when you think about the documentation, the first question is: what do I want to allow, and what is not allowed? There's this big concern, obviously, about putting proprietary information

out into one of these publicly accessible LLMs that is going to share data, right? We heard horror stories about that early on. And you can purchase enterprise-level tools that you can control, right? Which is what you did, Adam. And so you can do that in a way that you can control it, just like you can whitelist certain pages on the internet.

There are certain things you can control at the enterprise level. So, you know, again, I think telling people you can't do this is the wrong answer, because they're going to do it anyway. That's the thing. It's, why are there ashtrays in the bathroom on an airplane? Right? Because if someone does light up a cigarette, they'd rather they put it out in the ashtray and not blow up the whole plane. So

Adam Parks (07:31.206)

It's better to control it, right? It's better to put it into an enterprise-level model, is the conclusion I came to.

Sara Woggerman (07:47.128)

Put in the guardrails. I always think that's hilarious, so I love that metaphor. So you put in the guardrails, you tell them how to use it, and then: bring ideas. Here's the other thing I would say. I would have your policies say, if you think of ways to use it outside of these controls, outside of these parameters, bring it to this committee. Create

an AI committee that says, all right, this is how someone wants to use it. Let's figure out how to do that, and whether it adds any additional risk to our organization, or whether it actually makes us more effective. So maybe some of the things we could talk about today, Adam, are some of the ways people are using AI today. It can help with complaint responses, so you're being very empathetic, right?

You’re cleaning up those complaint responses, dispute responses. There’s conversational AI that has come amazingly far in the last 12 months.

Adam Parks (08:57.05)

I got to interview one of the Kompato bots on my other podcast, Receivables Podcast. We’ll put a link to that particular episode down below because for me that was a super cool experience.

Sara Woggerman (09:00.502)

Yeah.

Sara Woggerman (09:09.922)

Yes, and that particular bot has already advanced so much more since that call, because they just keep getting better with the little tweaks that you can make on those. So if you think about the consumer experience, you can do that verbally, you can do that via text, you can do that via email if somebody actually wants to communicate with you that way, and you can do that via chat.

Your touch points are now expanded in a way where, ideally, you would think 90% of the consumer interactions can be completed by one of these bots, right? And not that there's no human interaction, right? Because there will be need for human oversight. There will be need for

certain issues where people just maybe don't want to talk to the bot, or things that get overly complex. You need to have that off-ramp if a consumer specifically needs it. So, you know, humans are going to have to have, yeah.

Adam Parks (10:19.912)

Let’s touch on the human oversight for a minute because in episode two, I was talking with Tim Collins from InDebted and we were talking about how he was using it to review contracts. But he said, look, my fiduciary responsibility is still to review these contracts. But he’s been training this model for

Sara Woggerman (10:37.549)

Yes.

Adam Parks (10:40.948)

let’s call it four years, to learn about the specific language that he likes to use and will basically use it as a starting point. I’m also finding that in my world to be

Adam Parks (10:52.464)

another really valuable piece in terms of doing research. So, you know, since writing the TransUnion Debt Collection Industry Report for 2024, I've been asked to write other research papers. And it's something that I have a passion for, really; I enjoy nerding out on a nice long document and being able to go and actually research these things and pull all this information together, not just from, let's say, a survey that we've conducted, but finding additional information from the Federal Reserve, and having it help me outline documentation. So

Sara Woggerman (10:56.632)

Yes.

Sara Woggerman (11:05.42)

Mm-hmm. Yeah.

Adam Parks (11:22.358)

that I've got a more clarified outline that may be a little bit more obvious to, let's say, general users versus somebody who's gonna be overly detailed like me in going through it. So it feels like there are all these different use cases for the debt collection industry. There's voice, there's data analytics, and all of these really different use cases, and we did a webinar on the six use cases, but…

Sara Woggerman (11:22.68)

Yes.

Adam Parks (11:48.819)

What do you think is a good starting point for these groups to start getting comfortable with the usage of this type of technology? Should they be focusing on a particular use case to start with? Or how do you suggest that they take that first step on the journey of a thousand miles?

Sara Woggerman (12:08.5)

I think it's all going to be dependent on their risk tolerance. But I do think one place is probably a good starting point. I could see a scenario, let's just think of maybe a debt buyer or a creditor, where we dip our toe in an operational use case and a compliance use case, right? It could also be departmental, right? So maybe we

help get through our disputes faster over here, and maybe we use it as conversational text or chatting. Maybe we're not comfortable with voice because we want to make sure we have the TCPA controls in place, whatever. I think that there are ways to dip your toes into all these pieces. Legal, Tim's example, is another great one. Contract reviews. Debt buyers have to do this, right?

You're going to get lots of contract reviews that you need to do. But also, where there's potentially a gap is: what about all the contracts that the consumer signed, all the underwriting documents in the accounts that they purchase? What are you allowed to do and not do based on the terms and conditions for that particular consumer? Being able to ingest that information at scale

is a game changer for how to leverage technology, right, and do it safely. So I think there are lots of areas to think about, like how to ingest all of this. So pick an area, or two areas, where you think you could potentially have the most impact. Or the other route is to pick something that's not consumer facing, so maybe I dip my toe in here to get comfortable

and then move to consumer facing, right? It kind of depends on how you think about risk and what you're trying to achieve with the AI. Are you trying to achieve quicker results in potential collections and lower your cost to collect? Or are you trying to offset, or not add, human capital to your back office support functions? So that's kind of a business question, right?

Adam Parks (14:31.39)

So one of the things that I've attacked has been the data analytics aspect of it, because as debt buyers, as agencies, as law firms, as debt collection professionals, we are just

Sara Woggerman (14:36.323)

Mm-hmm.

Adam Parks (14:43.93)

piled on with available data, whether that be, you know, payment trends, or user behaviors on our websites or portals. There's all of this data that we can't understand. And I look at it similar to any debt portfolio that we want to evaluate. If I want to evaluate a debt portfolio, I can't look at each individual account as a human and get an understanding. And so since the beginning of debt buying, we've been using stratification tools to break down and try and generalize

Sara Woggerman (14:51.651)

Yes.

Sara Woggerman (15:06.083)

right.

Adam Parks (15:13.814)

some of the key aspects that we’ve found to be value driving criteria within a portfolio of accounts. Well, what are we capable of doing now when we can better analyze accounts on a line by line basis?

Sara Woggerman (15:20.012)

Mm-hmm.

Adam Parks (15:30.034)

I think that's a really interesting approach, where it's not consumer facing, so there's not really a lot of risk in you evaluating that technology, as long as you're doing it at an enterprise level and you're keeping your data, like, in your box. There are some really interesting opportunities there for you to better understand the portfolios that you're purchasing, or how the consumers are behaving, or how people are behaving on your website.

Sara Woggerman (15:30.594)

Yeah.

Sara Woggerman (15:47.16)

Yes.

Sara Woggerman (15:50.478)

Mm-hmm.

Adam Parks (15:58.229)

That's a really big opportunity, because you've got so much data and it's already there. All you have to do is ask it about this information. What do you see? What kind of trends should I be understanding? And it doesn't take a, you know, a ChatGPT engineer to run that kind of an analysis, because now you can ask it in plain language.

Sara Woggerman (16:06.499)

Right.

Sara Woggerman (16:12.835)

Yeah.

Sara Woggerman (16:20.184)

Well, yeah, and what if you could assign a score that is not just credit based, but a behavioral score? Yes, yes, that's exactly right, the behavior. And what does that look like on a line-by-line basis? And what does that do to the pricing structure? Right? I mean, if we can make that happen with AI, that's a game changer for the buying industry.

Adam Parks (16:25.62)

behavioral scoring.
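
[The line-by-line behavioral scoring idea the speakers describe could start as something like the sketch below. Every field name and weight here is hypothetical, made up purely for illustration; a real scoring model would be built and validated against actual portfolio data.]

```python
from dataclasses import dataclass

@dataclass
class Account:
    balance: float
    months_since_last_payment: int
    payments_last_12m: int
    portal_logins_last_90d: int

def behavioral_score(acct: Account) -> float:
    """Toy behavioral score in [0, 1]: recent payments and portal
    engagement raise it, long payment gaps lower it. Weights are
    arbitrary placeholders, not derived from any real portfolio."""
    recency = max(0.0, 1.0 - acct.months_since_last_payment / 24.0)
    frequency = min(acct.payments_last_12m / 12.0, 1.0)
    engagement = min(acct.portal_logins_last_90d / 10.0, 1.0)
    return round(0.5 * recency + 0.3 * frequency + 0.2 * engagement, 3)

# Score a portfolio line by line instead of relying only on strata.
portfolio = [
    Account(1200.0, 2, 6, 4),    # recently active, engaged account
    Account(5400.0, 18, 0, 0),   # long-dormant account
]
scores = [behavioral_score(a) for a in portfolio]
```

[The point of the sketch is the shape of the idea: each account gets its own score from its own behavior, rather than inheriting an average from a stratification bucket.]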

Sara Woggerman (16:49.75)

And I mean, yes, I agree with you. I think that's been the most frustrating thing about being in this industry for almost 20 years. I'm like, we have so much information, and yet somehow we can't figure out how to leverage that data in a way that drives our own behaviors or our own success.

It's been my number one biggest frustration. It should be driving our decisions, and yet I often see that, well, we can get here, but we can't get there. It's like, what? Or that it's unstructured data, yes.

Adam Parks (17:30.207)

Siloed data. That's really been the issue, right? Until the last probably two years. Yeah, structured versus unstructured data, and then having the inability to actually bring the data sets together. I mean, data lakes and real data warehouses in our industry have been few and far between, for lack of better language. Now it's

Sara Woggerman (17:44.12)

Yes.

Sara Woggerman (17:52.59)

Mm-hmm.

Adam Parks (17:53.973)

Like now you can plug into a Snowflake, and you're hiring data scientists. Like, data scientist wasn't a role at most debt buying companies five years ago.

Sara Woggerman (18:03.726)

correct.

Adam Parks (18:03.76)

Even two years ago, I would say it was few and far between. Now I want to say it's becoming almost a standard to have an analytics team. Even at the marketing and news firm, we have a data intelligence team who is responsible for collecting, analyzing, and providing us with constructive insights, because a whole bunch of data doesn't do anything unless you can extract actionable insights from it and operationalize the information at hand. So how much do you trust

Sara Woggerman (18:13.432)

Yes.

Sara Woggerman (18:27.606)

Right. Right.

Mm-hmm.

Adam Parks (18:33.684)

the information at hand and how many questions are you going to ask in order to validate the responses or validate the conclusions that are drawn from that data using an AI model?

Sara Woggerman (18:35.65)

Yes.

Sara Woggerman (18:46.818)

Yes, yeah.

Adam Parks (18:47.688)

Feels like something that should be in our policy though, right? Like the validation of how you’re doing it, that human oversight feels like it would be a really big part of the policies too.

Sara Woggerman (18:50.242)

Yes.

Sara Woggerman (18:57.27)

It has to be. And that means that the people on your team today might have to change, or they're going to have to up their game. So, I'm an auditor for RMAI; I'm very involved with the RMAI certification program. Their latest version talks about how the person that you've designated as your AI person needs to have a certain amount of professional training a year.

And I think that's an interesting requirement that they put in there, right? Because we can't be the… and that's in the redline version. As of us taping this, they haven't released the final approved version of 13.0, but I anticipate it will be some form of that. But I also think

Adam Parks (19:33.62)

It’s a hard one. What is professional training?

Sara Woggerman (19:57.132)

we can't just have the blind leading the blind on this either, right? So, you know, we've gotta have somebody who's continually pushing themselves to understand how these models work, how you can leverage these things, and then what human oversight needs to look like. So

what does that look like? So I had somebody recently say something to the effect of, well, compliance will not be as important if we build all these AI tools. And I was like, are you kidding me? It's going to become more important, because there will be new challenges that arise as a result of this. We have never once innovated and things didn't get… you know, we have to evolve with it, right? Your compliance people have to evolve with it.

If your compliance team and your IT team, or your data security team and your data scientist team, are working together on this, which they should be, let's say you're trying to evaluate for potential unfair, deceptive, and abusive acts and practices, or potential discrimination and bias. That is the biggest concern on the regulators' minds: are you inadvertently

Adam Parks (21:02.643)

Mm.

Sara Woggerman (21:21.158)

discriminating? LLMs learn from human behavior, which is inherently biased. So there's lots…

Adam Parks (21:26.376)

We go back to the Goldman Sachs example with the Apple Card, and, you know, we talked about that I think in episode two. That's definitely something that can happen. But how do you… so I've heard the argument that you need an AI model to watch your AI model.

And although that may be true to some level, so that multiple models are looking at the same outputs, if you're not going to run a human statistical analysis on the back end, I think you're setting yourself up for disaster.

Sara Woggerman (21:56.67)

Agreed. That is where, yes, you need to have some analysis on the back end of that to look at: for what part of our population was the decision making altered, and could that be driven by something that could be considered a discriminatory practice, right? And let's remember, folks, when the regulators come in… now, at the federal level right now,

you know, we've got a couple of years probably until that really becomes a concern. But remember, if the pendulum swings the other way at the federal side, they go back five, six, seven years, right? So you want to be looking at this type of stuff now. State AGs are very concerned about this. There are 81 AI bills, as of yesterday, that RMAI is tracking that could negatively impact our industry. So

Adam Parks (22:35.358)

Mm-hmm.

Sara Woggerman (22:52.45)

you know, the states are going to have different interpretations and things like that. But this is where it's going to bite us, right? If we don't do this look-back. Because we need to be able to evidence that we're not being discriminatory, that the machine is being neutral, and that we don't have a disparate impact situation, right? Where something appears facially neutral

but it had an unintended consequence.
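
[The back-end look-back the speakers call for can start as simply as comparing favorable-outcome rates across cohorts. Below is a minimal sketch that borrows the four-fifths rule of thumb from adverse impact analysis as a flag threshold; the cohort names and counts are entirely made up, and a real review would add proper statistical testing on top of this.]

```python
def adverse_impact_ratio(selected: dict, totals: dict) -> dict:
    """Selection rate per group divided by the highest group's rate.
    A ratio under 0.8 (the four-fifths rule of thumb) flags potential
    disparate impact and warrants a closer statistical look."""
    rates = {g: selected[g] / totals[g] for g in totals}
    best = max(rates.values())
    return {g: round(r / best, 3) for g, r in rates.items()}

# Hypothetical back-end check: how often did the model route accounts
# to a favorable outcome (say, a hardship-plan offer) per cohort?
offers = {"cohort_a": 180, "cohort_b": 95}
accounts = {"cohort_a": 400, "cohort_b": 310}
ratios = adverse_impact_ratio(offers, accounts)

# Any cohort whose ratio falls well below 0.8 is a signal to dig into
# what the model keyed on, and to document that investigation.
flagged = [g for g, r in ratios.items() if r < 0.8]
```

[Running and documenting a check like this periodically is one concrete way to "evidence that the machine is being neutral" rather than asserting it.]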

Adam Parks (23:23.25)

It's going to require math, because you're going to end up with all of these state regulators that don't understand the technology, don't understand the industry, and, to be perfectly frank, don't care. And I think that's the most dangerous part, because they just don't care. They wrap you into financial services, or…

Sara Woggerman (23:36.778)

Right? Mm-hmm. Yes.

Adam Parks (23:42.463)

They think you're the boogeyman. And when they think you're the boogeyman, unless you've got some hard numbers that you can use to defend yourself in that conversation, you're going to be dealing with an adversary, not a partnership. I feel like, from a federal perspective, the CFPB was probably in a better position to try and understand some of this technology and roll it out. I'm not saying that regulation through enforcement or legislation through enforcement was the way to go, right? Like, I've got other podcasts on that, but

Sara Woggerman (24:07.374)

Correct.

Sara Woggerman (24:12.098)

Mm-hmm.

Adam Parks (24:12.596)

I think, from this perspective, if we're not running a human-oversight mathematical analysis of this type of technology on the back side, and if that analysis is not actually baked into our policies and procedures… because just like everything else, when we went through the 2012 through 2018 time period, if it's not documented, it didn't happen. And I don't want people to lose sight of that

Sara Woggerman (24:37.678)

correct.

Adam Parks (24:39.974)

reality because we're moving forward, or because the CFPB is weaker than it was last year. There's still a pretty significant threat.

Sara Woggerman (24:46.838)

I agree. There is, and you've got about 15 AGs you need to be worried about today. Just for context, I'm part of the RMAI legislative committee, and at the end of last year there were a little over 100 bills. It's almost 400 bills as of yesterday that are negative. So the increase in just state legislative activity is huge.

And the top issues are around data privacy and AI, which we're talking about today. And so when you get these 50 different interpretations of what even AI is, or a private right of action in there, right, which there's a bill in New Mexico proposing, and that's very concerning, we've got to make sure that we stay on top of these things. We need to make sure that we're kind of preemptively

knowing where things are going. So, for example, if you're using AI, let's just disclose it, guys. Let's not hide that, right? Because guess what? It is so key. How are you using the data? What are you using it for? You know, this is becoming a conversation between collection agencies and creditors.

Adam Parks (25:54.037)

Transparency, I think, is the key to all of this, right?

Sara Woggerman (26:10.47)

And being really transparent that this is the limited scope of what we're using. It might expand, but we'll talk to you about it beforehand. And they might even want to hear reports on what that's doing for your organization, because they're also learning. Some of the banks are really scared of it, while others are embracing it fully. So there's a split there as well. But transparency and reporting on this

are super critical. Your policy documents need to essentially outline what it is okay to use AI tools for and what it is not, and have a committee for reporting any issues people might identify, right? So that's part of your training with your committee: when should I elevate something that might look funny, and how to elevate those things.

And that way you're getting ahead of those issues before a regulator determines they're an issue, right? So we need to think about things from: what is my purpose in using the tool? And then, does this tool potentially cause consumer harm or bring risk to my

organization, from a data security perspective, from a compliance perspective, from a consumer financial law perspective? And then make sure that you have oversight over any of those potential risk areas, and reporting, and document things with data. Again, we're going to have to tell a story that's going to look a little bit different. And what we've always wanted to be able to do

is be able to tell really good stories with data to the regulators because they tell a story about us using their data.

Adam Parks (28:09.268)

Regardless of whether or not the data is accurate, recent or…

Sara Woggerman (28:15.168)

Right, correct. Like, it could be something that happened five or six years ago, and you've totally changed things since then, but they want to make an example out of something you did seven years ago. We've seen that happen, right?

Adam Parks (28:28.948)

Well, the medical debt, or the medical credit reporting, is based on a 2012 report or something like that. It's over 10 years old.

Sara Woggerman (28:34.21)

Correct, yes. And honestly, everything with the medical is a knee-jerk reaction to the pandemic, right? So we had all these weird things sort of happen, and now we've got this knee-jerk reaction that looks like it might self-correct. We'll see, but time will tell on that.

Adam Parks (28:44.466)

Mm-hmm.

Adam Parks (28:59.092)

So I've got another question for you, though. One thing I saw in the TransUnion survey results this year was that more companies are definitely starting to use AI.

Sara Woggerman (29:09.998)

Yes.

Adam Parks (29:10.29)

When we looked at, you know, larger companies, let's say over 100,000 accounts under management, we found that even there, in the larger organizations, the majority of the AI exploration, let's call it, is happening with third-party vendors versus trying to develop AI internally. Now, there are exceptions to that; there are organizations that are digital first and driving AI and all of that. But let's look at the majority of the industry.

Sara Woggerman (29:30.456)

Mm-hmm.

Adam Parks (29:37.468)

So, are there some criteria that a debt buyer, agency, or law firm should be looking at when they're evaluating these potential AI vendors, to help bring that comfort level? And how does that change their vendor management policies?

Sara Woggerman (29:47.49)

Yes.

Sara Woggerman (29:52.866)

Yes, so their vendor management oversight needs to entail a couple of things. And the thing that they really need to understand is: if you're using a third-party vendor, how is that data being used? There are some vendors out there today, and obviously I won't name any names, whose whole business model is to use as much of your data as possible to build this giant machine. You might not be comfortable with that, right?

You might say, I only want my model to be built off my data, and not to help Adam Parks' company, right? You need to understand that going in. How is that data being used? Is it being sold elsewhere? Is it being used to build something bigger, better, whatever, in the minds of the vendor? So, understanding where that data is going to go.

Privacy impact assessments are about to become the biggest thing you're going to have to do with vendors, right? Understanding and checking exactly where each data point is being mapped to. I don't believe it's going to be enough to have a SOC 2 Type 2. You're going to have to know where, literally, the data I am supplying you is,

and what it touches, because the spider web that this data set could become, that's the scary part, right? And so you're going to have to have preemptive controls contractually in place for where this could go and how it might be used. I've heard several lawyers talk about that, and I think that's a really good idea. But also checking up, because we all know

Adam Parks (31:31.038)

Mm-hmm.

Adam Parks (31:49.533)

How do you audit to that requirement? You can put that requirement in your documentation, you can talk to the vendor about it, but how are you going to audit to that?

Sara Woggerman (31:51.542)

Yeah. You're going to have to. By doing those privacy impact assessments, I think that's going to be a big part of that process. You're going to have to evidence where that piece of data went. And that might be challenging, right? Because coders think in code, right? They don't necessarily think about, well, it goes from here and then it becomes

you know, this tokenized gobbledygook of numbers, and whatever, right? And that's all fine, but you have to be able to explain this to somebody who doesn't understand that language. So you're going to have to show me and prove to me where all that information went, and that it's being masked, or not being used in a way that could be nefarious, right?

Because we don’t want our consumer data, our data, to get in the hands of someone who’s going to do something bad with it, obviously.
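
[One way to make the data mapping described here auditable is to keep a machine-readable inventory of every field shared with a vendor, and check it automatically as part of a privacy impact assessment. This is a toy sketch; all field, vendor, and storage names are hypothetical, and the two checks shown stand in for whatever rules a real PIA would enforce.]

```python
# Toy data map: for each field shared with a vendor, record where it
# flows, how it is protected, and whether the vendor may reuse it.
DATA_MAP = [
    {"field": "ssn", "vendor": "scoring_api", "storage": "vendor_cloud",
     "protection": "tokenized", "used_for_training": False},
    {"field": "payment_history", "vendor": "scoring_api",
     "storage": "vendor_cloud", "protection": "encrypted",
     "used_for_training": True},
]

def pia_findings(data_map):
    """Flag rows a privacy impact assessment should escalate:
    sensitive fields that are unprotected, or fields the vendor
    reuses to train its own models."""
    findings = []
    for row in data_map:
        if row["protection"] not in ("tokenized", "encrypted"):
            findings.append((row["field"], "unprotected in " + row["storage"]))
        if row["used_for_training"]:
            findings.append((row["field"], "reused to train vendor models"))
    return findings
```

[The value of an inventory like this is exactly what the conversation asks for: you can show a non-technical examiner, field by field, where the data went and how it is protected, instead of pointing at a black box.]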

Adam Parks (33:02.708)

Well, in episode number two, I was talking with Tim Collins about how AI is starting to be used for social engineering attacks. And so when we’re talking about where is this data going, it’s not just about where is it going technically, it’s about who physically is going to have access to it and who can open and close the doors to those systems.

Sara Woggerman (33:11.374)

Mm-hmm.

Sara Woggerman (33:20.151)

Yeah.

Adam Parks (33:23.356)

And that's where it starts getting really scary. He was talking about a particular attack on him that used publicly available information, and we kind of walked through how all of that came together. So as people are starting to think about it, that's definitely an episode to watch. But I'm interested to learn more about these privacy impact assessments and what that's ultimately going to look like. How are we going to train all of these auditors and all of these people to do that? And are the vendors all going to be capable of

doing it? I know some of the vendors that I've talked with are built on transparency, they're built on that capability, and I want to say it's part of their sales pitch now: look, we're transparent, and we can provide you with an understanding of what happens in the black box. Others still have the black box, but I think the regulators are mostly concerned

Sara Woggerman (34:09.154)

Yes.

Adam Parks (34:13.532)

with the black box. And you may have been able to explain it to the technical people at the CFPB, but I think at a state level, dealing with California, New York, and some of these other states...

Sara Woggerman (34:15.406)

Mm-hmm.

Sara Woggerman (34:22.658)

You’re going to have a much more difficult time. Yes.

Adam Parks (34:25.94)

I think it's going to be a lot more difficult when they don't understand the underlying technology, and now you're going to have to explain this at a much lower grade level. We're not going to be able to speak in PhD language. We're going to have to be able to explain everything from an AI perspective at an eighth grade level in order for us to

Sara Woggerman (34:44.046)

Correct.

Adam Parks (34:44.668)

have it be accepted. Not that they can't understand a higher level of technology, but I think that's the level they're ultimately going to be looking for. And I think they are going to look at us and say, can you explain this to, and I hate this term, the least sophisticated consumer, right? Which, if I remember correctly, is still an eighth grade reading level.

Sara Woggerman (35:01.026)

Yes, that's exactly right. Yes. And that means you're going to have to possibly explain how you're using it on your consumer-facing website, right, to the least sophisticated consumer. Because the word...

Adam Parks (35:17.843)

Mm-hmm.

Adam Parks (35:21.714)

and the privacy policy is going to have to account for whatever information you’re currently collecting and how you’re using it.

Sara Woggerman (35:24.844)

Correct.

Sara Woggerman (35:28.694)

Right. And being very, very clear about that, because again, that word scares a lot of people; they make assumptions off of things they've seen in movies and, you know, whatever. This is not just a hot topic for us; it's a hot topic across the whole country. But, you know, understanding that

it's about getting your messaging across that you're using it to actually better their experience. There were a lot of people who felt really uncomfortable that Facebook was the first to really do targeted marketing, right? And do it really successfully. But I'm like, it's highly effective.

Adam Parks (36:14.846)

Mm-hmm.

Adam Parks (36:19.529)

Yeah.

Sara Woggerman (36:28.174)

Facebook sells me a lot of stuff that I didn't know I needed. Yes, right. So, I mean, they've definitely done a really great job of that. And I don't mind that so much, right? Like, I like having a more personalized experience.

Adam Parks (36:31.96)

I didn't know I needed it. Yeah, between, I would say, Meta and Amazon, they've got that down now.

Sara Woggerman (36:52.014)

Sometimes I don't love it when I'm like, okay, I just had a conversation; you don't need to show me all the things about this one thing, right? But I think most people actually like how we've advanced their everyday life, right?

Adam Parks (37:09.556)

Well, the personalized content piece is starting to get me. I'm starting to get requests on websites: okay, can you help us do AI-driven personalized content? And my general response is, only if that consumer has authenticated, because otherwise we've got privacy issues. If I'm doing it in Europe, I've got GDPR; doing it here, I've got CCPA and a host of other state-level issues from a privacy standpoint. So the question becomes, can I even execute on that unless I know who the consumer is? Now, you can do some behavior-driven things and say this consumer

Sara Woggerman (37:26.882)

Yes. Yes.

Adam Parks (37:39.6)

has come back multiple times. But if I do that, I'm tracking that individual consumer and how many times they came back to the website. Is that covered in my privacy policy, and is that a violation of one of the existing privacy rules?

Sara Woggerman (37:51.299)

Yes.

Yeah, you've got to think through all those disclosures, right? And I know there have been companies who wanted to use Facebook for targeted marketing just to be like, you know, we're the good guys; here's what you do if you got a notice from us, right? And it immediately was shut down. But there are some challenges there, yeah.

Adam Parks (37:57.576)

Mm-hmm.

Adam Parks (38:14.216)

There are some challenges there, and I'm currently working on some remarketing-type tool sets that I think would be powerful, but our approach is a little bit different than "we're the good guys." I'm not sure if it's going to happen, but I'm trying to get a bank to do it first, because we know we're going to have to litigate that, and I'd rather do it with ABA money than, you know, RMAI money, because it's going to be expensive the first time we go through the process.

Sara Woggerman (38:39.298)

Yes, it's definitely one of those interesting things: if we can figure out a way to do this where it's not targeting Adam Parks, but targeting a subset of individuals that commonly use us, you know...

Adam Parks (38:53.806)

users that went through the portal and didn't make a payment; can I then show them content about the value of good credit? And maybe it's not from the targeted organization, but it's, you know, financial literacy content to prompt an interaction. I feel like there are some opportunities there as the debt collection industry quickly becomes an e-commerce business. Nobody really wants to talk about that yet, but my prediction is inside of 10 years.

Sara Woggerman (39:01.933)

Yes.

Yes.

Correct.

Sara Woggerman (39:12.64)

Mm-hmm.

Sara Woggerman (39:17.57)

Right. Well, in addition, you've got to combat the insanity of what is on TikTok, right? You've got people telling people to do these crazy things that are harming them in the long run. So it's like, all right, if you can create a few reputable sources for financial literacy and help with that... I think we've got to do something, because

Otherwise.

Adam Parks (39:47.583)

Sara, maybe you're the right person for that. We're gonna be doing a LinkedIn Live where we combat bad financial literacy content on TikTok and Instagram, and I've got some really great speakers. I feel like that's one I'll reach out to you for too. Look, a couple years ago, I paid a group of people across all different age brackets to go out and find me good financial literacy content on the internet, and what they brought back to me shocked me to my core. I mean,

Sara Woggerman (39:54.738)

Sara Woggerman (39:59.264)

Yeah, this is a serious pain point for sure.

Adam Parks (40:14.536)

people giving the advice of, the debt buyer bought your debt, so now they owe it. Okay, that's not even close to true. And stop citing laws that don't even apply. Like, what are you talking about?

Sara Woggerman (40:21.128)

Right?

Sara Woggerman (40:25.667)

Right?

Adam Parks (40:27.512)

And I think we just really want to help consumers and provide them with good information. And I think the debt collection industry, in its use of artificial intelligence, is trying to focus on that: yes, we want to become more efficient organizations, but we also want to provide more self-service technology and enable consumers to engage with us without the shame factor on the phone. And as you look at the different generations, different generations use technology differently. My wife and I use

Sara Woggerman (40:28.44)

Yes.

Sara Woggerman (40:49.153)

Agreed.

Adam Parks (40:55.956)

very different technology to communicate with our friends, right? She's in Brazil; they're very focused on using voice messages, texting voice messages back and forth. To me, that's a voicemail, and I haven't listened to my voicemail since like 1988. So, right, like, show me text. Everybody's got those different, you know,

Sara Woggerman (41:08.558)

Yeah, right.

Adam Parks (41:15.484)

methodologies for what they’re comfortable with. And I think artificial intelligence is going to empower and enable people to get the communication that they prefer through the channel in which they prefer it. And

If we can get our policies and procedures wrapped around it so that we can feel more comfortable and confident in the use of the technology, the deployment of the technology, and even as debt buyers understand how our agencies are using it and what questions we should be asking and baking that into our vendor management policies, not only in our IT policies for ourselves, but how are we going to manage this as it relates to third parties?

Sara Woggerman (41:43.598)

Yes.

Sara Woggerman (41:53.838)

Yeah, exactly. So, as we sort of wrap up this conversation: if you haven't already, you should be talking to all of your vendors about what their AI plan looks like. Everybody's thinking about it. Everyone's talking about it. But you need to understand it, right? So with the debt buyers I'm working with, I'm like, that needs to be in your due diligence questionnaires. And I'm not saying you have to tell them

what to do at this point, but you need to understand where they're at in this journey, right? Because again, transparency is key, and you're going to want to make sure that you put the right controls on your specific data if you're a creditor or a debt buyer. And then, you know, collection agencies, I see so much opportunity here. You've been running on really thin margins for years. There's so much here that can...

You can lead compliantly, right? Our biggest challenge is human behavioral error. Most of our complaints are human. Okay, let me rephrase that: most of our substantiated complaints are based on human error. A collector didn't code something right; something didn't happen. If we can cut that margin of error down, imagine what that does to

the consumer experience, our reputational risk, and just how much time that can save us. And then the data can show it, so that we can say: this is why we innovated, regulator, whoever that is, the AG from Minnesota, whatever, right? This is why we innovated. We actually reduced our risk, and we reduced consumer harm.

That should be the goal of this innovation, and your policy document needs to show very clearly how you made that happen. And keep it up to date; you might be updating this one every three months. That's okay, that's okay.

Adam Parks (44:02.781)

I think that’s a reasonable expectation at the current pace of innovation.

And Sara, I can't thank you enough for coming on and sharing your insights. As always, you provide some really sound advice on how people can operationalize the opportunities they're seeing around them in the debt collection industry. And I think the AI Hub has become a great opportunity for us to openly have some of these discussions about the challenges organizations are facing in the space, and to make people not feel so alone on their AI journey.

Sara Woggerman (44:08.418)

Mm-hmm.

Adam Parks (44:38.235)

So thank you so much for coming on and participating in being part of the solution.

Sara Woggerman (44:43.928)

Thanks for having me. Appreciate it.

Adam Parks (44:46.078)

For those of you that are watching, if you have additional questions for Sara and myself, you can leave those in the comments on LinkedIn and YouTube, or you can probably catch Sara on stage at pretty much every conference for the remainder of 2025. If you leave your questions below, we'll be responding to those, or if you have additional topics you'd like to see us discuss, you can leave those in the comments as well. And I'm willing to bet I can get Sara back at least one more time to help me continue to create great content for a great industry. But until next time, Sara, thank you so much. I really do appreciate all of your insights.

Sara Woggerman (45:14.434)

It was my pleasure.

Adam Parks (45:15.804)

And thank you everybody for watching. Thank you for your time and attention. We’ll see you all again soon. Bye everybody.

 

The AI Policy Playbook for Debt Collectors

Did you know AI adoption in the receivables industry is accelerating faster than most organizations can document it? With over 400 state-level bills targeting AI, compliance professionals can no longer afford to “wait and see.”

In this episode of the AI Hub Podcast, Adam Parks sits down with Sara Woggerman, President of ARM Compliance Business Solutions, to break down AI policies in debt collection, vendor oversight for AI tools, and the best practices for AI policy writing that every agency, debt buyer, and law firm needs to know.

Whether you’re drafting your first AI policy or fine-tuning your existing framework, this episode delivers timely, tactical guidance you can act on today.

Key Insights from the Episode

1. Policies Must Be Nimble, Not Restrictive

“Don’t shut down innovation—put in the right guardrails and guide your team.” – Sara Woggerman

In the rush to address risk, many organizations are overcorrecting by banning AI tools entirely. Sara emphasizes that this is a mistake. AI policies in debt collection should never be so rigid that they hinder innovation. Instead, organizations need policies that set clear expectations while allowing room to test, iterate, and grow. Flexibility is key, particularly in an environment where tools and capabilities evolve weekly. Define what’s acceptable use, what’s not, and offer employees a channel—like an internal AI committee—to request new use cases or escalate compliance concerns.

Watch this moment – 04:29

2. Vendor Oversight Is Non-Negotiable

“Your vendor’s AI could expose your organization if you don’t know how they’re using your data.”

The use of third-party vendors to implement AI in collections is rising, but many agencies lack a formal strategy to evaluate these relationships. Sara warns that vendor oversight for AI tools must go deeper than checking off certifications. Agencies need to understand where their data goes, how it’s processed, and whether it’s being used to train broader models. This means integrating privacy impact assessments into standard vendor management practices. You should also examine contracts closely—if your vendor has vague data ownership terms, you could be exposing consumer data in ways you never intended.

Watch this moment – 07:45
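The "where does each data point go" exercise Sara describes can start as a simple data-flow inventory that a privacy impact assessment builds on. The sketch below is purely illustrative: the field names and risk rules are assumptions for demonstration, not a standard PIA schema or any specific vendor's requirements.

```python
# Illustrative sketch only: a minimal data-flow inventory for a vendor
# privacy impact assessment. Field names and risk rules are assumptions.

from dataclasses import dataclass

@dataclass
class DataFlow:
    field: str           # the data element shared, e.g. "consumer_ssn"
    vendor: str          # who receives it
    purpose: str         # why it is shared
    leaves_org: bool     # does it travel outside your environment?
    masked: bool         # is it tokenized/masked before it leaves?
    trains_models: bool  # may the vendor use it to train broader models?

def flag_risks(flows):
    """Return flows that need contractual or technical controls:
    unmasked data leaving the organization, or data feeding
    a vendor's broader model training."""
    return [
        f for f in flows
        if (f.leaves_org and not f.masked) or f.trains_models
    ]

# Hypothetical inventory entries for a fictional vendor
inventory = [
    DataFlow("consumer_ssn", "ScoringVendorX", "account scoring",
             leaves_org=True, masked=True, trains_models=False),
    DataFlow("payment_history", "ScoringVendorX", "account scoring",
             leaves_org=True, masked=False, trains_models=True),
]

for risk in flag_risks(inventory):
    print(f"Review needed: {risk.field} -> {risk.vendor}")
```

Even a lightweight record like this gives an auditor something concrete to test against: each flagged entry should map to a contract clause or a masking control, which is exactly the evidence trail Sara argues a SOC 2 alone won't provide.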

3. Human Oversight + Data Validation = Policy Strength

“The model might be right, but regulators need to see your proof.”

It’s not enough to trust AI-generated results—you must validate outcomes with human oversight and documented processes. Whether your model is analyzing contracts, resolving disputes, or scoring behavior, it must be monitored for accuracy, fairness, and bias. Sara and Adam discuss how teams should routinely audit results and run statistical analyses to check for unintended discrimination or UDAAP risk. Regulators will not simply accept that you “used an AI tool responsibly”—they will want to see that you tested, reviewed, and governed those outputs appropriately.

Watch this moment – 18:00

4. AI Compliance Will Require Ongoing Education

“The person responsible for AI should be getting professional training every year.”

The industry is seeing a shift where compliance professionals are expected to become fluent in AI oversight. In fact, Sara points to recent updates in the RMAI certification redlines that propose requiring annual professional training for any team member responsible for AI governance. This highlights how critical ongoing education will be in the years ahead. Compliance teams can no longer act in isolation from IT and data science teams—they must work collaboratively to develop, audit, and evolve internal frameworks that meet evolving laws and consumer protection standards.

Watch this moment – 25:54

Actionable Tips for AI Policy Success

  • Create an AI Use Committee to review new ideas before implementation.
  • Update your policies every quarter to keep up with evolving tech and laws.
  • Include disclosure language on websites or portals where AI is used.
  • Begin with low-risk use cases like data analysis or internal process automation before scaling to consumer-facing applications.

Timestamps to Key Moments

  • 03:40 – How to form internal AI committees and test ideas
  • 04:29 – Why AI policy matters more than ever in collections
  • 07:45 – Understanding the real risks of third-party AI tools
  • 12:08 – Where to begin your AI journey (operations vs compliance)
  • 18:00 – Human validation and policy documentation best practices
  • 25:54 – How state-level laws are evolving and what to expect next

Frequently Asked Questions About AI Policies in Debt Collection

Q: What are AI policies in debt collection?
A: A formal framework that governs how AI is used within a debt collection agency or law firm, including guidelines on data usage, oversight, compliance, and transparency.

Q: How do I oversee AI vendors effectively?
A: Conduct privacy impact assessments, require full disclosure of data use, and ensure contract language limits unauthorized data access or reuse.

Q: What’s a best practice for AI policy writing?
A: Start with a small internal use case, define acceptable tools and usage, and create an escalation process for AI decisions with possible compliance implications.

Q: Are regulators enforcing AI policies yet?
A: Yes—state AGs are already active, and new AI legislation is rapidly emerging. Regulators may review years of activity retrospectively.

About Company

ARM Compliance Business Solutions

ARM Compliance Business Solutions is a woman-owned, U.S.-based consultancy that serves creditors, collection agencies, debt buyers, collection law firms, and receivables service providers.

Our services are designed to provide organizations of all sizes the tools and skills to overcome their unique compliance and business risks related to consumer financial laws. We bring operational strategies and compliance processes together.

About The Guest

Sara Woggerman