Avoiding AI Trouble in Collections: Legal Risks, Strategies & Regulatory Insights
In this expert-packed episode, Nabil Foster of Barron & Newburger breaks down the legal risks of AI in collections, delivering actionable guidance for credit and collections professionals navigating today’s complex AI regulatory environment.


Adam Parks (00:01.41)

Hello everybody, Adam Parks here with another episode of the AI Hub. Today I'm here with a gentleman who has such a broad breadth of knowledge around both the legal side and the technical side: Mr. Nabil Foster with the Barron & Newburger firm. How are you doing today, Nabil?

Nabil Foster (00:20.189)

I'm doing great, Adam. Thank you so much. And your introductions always make me blush just a little bit. You should write my taglines for my LinkedIn stuff. I'd get a lot more posts, or a lot more reactions.

Adam Parks (00:35.634)

Well, that’s always on the table. I really do appreciate you coming on and having a chat with me today because I think AI is such an important part of where our industry is headed and understanding the regulatory framework around that, some of the privacy rules and the mishmash of things that we need to really be aware of as we start to leverage artificial intelligence across the industry. But before we jump into our conversation today, just a quick word from our sponsor, Latitude by Genesis.

Adam Parks (01:07.266)

All right, Nabil, thank you so much for coming on and having a chat with me today. For anyone who's not been as lucky as me to get to know you through the years, can you tell everyone a little bit about yourself and how you got to the seat you're in today?

Nabil Foster (01:18.981)

Oh boy, okay, what about myself? All right, I've been doing this for a minute, as they say in the South. I came out of law school in 2000, worked for a judge, worked for a few different firms, and really came into this space back in like 2007. And so yeah, I've been in this space since then.

And the ARM industry has certainly gone through some changes, some growth spurts and some contractions. We've had a few cases that have sort of upset the whole industry and overturned the apple cart, and then we start to put Humpty Dumpty back together again.

I do this litigation stuff, I'm pretty good at it, I like it. I teach at a law school; I teach trial advocacy and legal ethics. So how did I get to the seat I'm in now? Well, being friendly with people in the industry and doing some good work. And then a little bit of luck, which always factors into everyone's life. Sorry, that may not be the best answer, but you know.

Adam Parks (02:32.429)

It does. No, that's great. But isn't opportunity the intersection of luck and preparedness? And so I think having the experience going back to 2007 and living through some of the economic cycles that this industry has gone through provides you with some insights into where it's going as well. Because we can never predict where we're going unless we understand where we've been.

And that's where I think this conversation starts to get really interesting as we look at artificial intelligence. Can you help me understand kind of the lay of the land as it relates to artificial intelligence in the debt collection space? Like, how are the regulators and the legislators ultimately looking at us through an AI lens?

Nabil Foster (03:20.469)

Ah, well, that's a very good question, because that's not apparent from most of the discussion that you see about AI. Most of the discussion about AI in the press is from an investor standpoint: the valuation of this company, the price of a particular stock of a company that makes chips for AI, energy plants, blah, blah, blah. There's all this buzz around it, but when it starts to get a little more focused on

the application of AI into a heavily regulated industry, the perspective changes a great deal. I'll just say it becomes a bit darker. Read the Financial Times or the Wall Street Journal and it's blue sky, the sky's the limit on these things. But remember, that's out in the consumer space, right? And for the US economy,

the consumer space is the biggest sector of our economy. You know, consumer spending is what drives this engine for the most part. I think it's like 70%, something like that. And so AI as it's implemented in people's, you know, social media things, or they're using it to write term papers, or they're using it inside companies to create content for…

annual reports, what have you. That's all sort of what I consider, in the broadest perspective, consumer facing, or the consumer market. And I've got to be careful how I use "consumer"; in our context it means something very specific, so I don't want people to get confused. When you talk about regulators, they start looking at this with a far more jaundiced eye, a far more skeptical eye: well, what are you using? What are you using this for? How's it going to,

Adam Parks (04:58.574)

Sure.

Nabil Foster (05:14.865)

They're asking about use case scenarios because they want to know: how does this get rolled out? What does it look like? What is the consumer, in the sense that our industry understands it, faced with? What are the choices? What are the options? So it's a far more detailed run of show regarding the implementation when it's consumer facing. And that's, for the most part, what I'd call the front of house.

You know, anyone who's worked in a restaurant knows about front of house, back of house, right? And I think the same analogy works here. Those regulators focus much more on the front of house. So if people are putting up a, you know, AI chatbot to actually communicate with consumers about their debts, that's gonna get a lot of scrutiny, a lot of scrutiny. And you gotta have answers for all that for them to be satisfied. Otherwise, you know,

Adam Parks (05:47.992)

Yeah.

Nabil Foster (06:12.991)

if they don't understand how this is happening or the risks involved, they're more likely to be skeptical of it, and you can run into problems. I'll stop there.

Adam Parks (06:25.686)

No, I mean, look, they always talk about the black box and the challenges of the black box itself. And what does that ultimately mean? Information going in, information coming out, your inputs, your outputs, and how is that being processed within the black box? And I feel like early on that was really the case with a lot of the tools that were out there.

You put some data in, something came out the other side. From a scoring model perspective, we'll use account segmentation as an example of a use case that can run you into some trouble. We can look at the Goldman Sachs instance, when Goldman Sachs was issuing the Apple Card and they were issuing different credit limits to different people based on sex, right? Gender was a driving factor in their model for some reason.

As I've started to look at it, the black box feels like it's starting to dissipate, because everybody realizes that you have to be able to understand what's happening within the model and explain it. If you can't explain it, you can't use it. But it becomes a question of using account characteristics to drive treatments and decisions versus

behaviors to drive treatments and decisions, and kind of the differentiation between those two worlds. Any insights on what you've seen or experienced between those two worlds, groups that have been using, let's say, the raw math versus the behavior? Because with those account characteristics, you can't really break accounts down and treat them differently based on zip code; you're going to run into redlining and other issues. But behaviors…

Nabil Foster (08:01.269)

That's correct. Because if you look at it as like a straight math problem, you're like, what's the objective data that you have? And you're right, those are the things that are easily understood: if you're doing things by zip code, right? Or doing things by, you know, gender, right? Those sorts of hard data points can get you into trouble. Maybe not a singular data point, but now if you've got

a couple of data points that can be pieced together into somewhat of an image of, well, you are targeting certain populations over others, you're giving different rates. That's where you're gonna get into the inquiries, because that's the smoke, right? That's the hint of the smell of smoke that draws the regulators, and they're like, oh, is there a fire here? And nobody wants to be

Nabil Foster (08:54.309)

the subject of a fire inquiry from a regulator, because if they think they smell smoke, imagination can become a self-fulfilling prophecy. Well, I smelled smoke. There must have been something burning. It may not be burning right now, but something was burning. So we're going to keep looking until we find where the traces of the fire were. And you're stuck in a CID for nine months, and you're like,

Adam Parks (09:13.422)

That’s it.

Nabil Foster (09:22.473)

hundreds of thousands of dollars out the door on lawyers like me or other people, you know, trying to patch up the dike or put Humpty Dumpty back together again. It's better just to avoid it. And to go back to your question, it's the behavioral that probably has the most practical use case, and the one that's more individualized, so therefore a little bit less subject to…

the type of scrutiny of class treatment. And I use that word as a play on words, because you want to avoid class actions, right? And how do you get class actions? You have a uniform way of treating or doing something across the board. And if the way you're doing that violates one of these statutes, whether federal or state, and guess what, people?

We're gonna see a lot more state regulators stepping into this, and you're gonna get a hodgepodge mix of restrictions regarding data privacy, what have you. I think you and I talked about this a long time ago: it's not too far off where everyone's gonna be considered a data broker just based on the information they have.

Adam Parks (10:34.562)

Well, based on the rules they were trying to pass, literally everybody on the planet with a contact list on their phone is a data broker.

Nabil Foster (10:44.117)

Yeah, yeah, the definition becomes absurd. They write them so broad because they're trying to cover everything. The legislature has in their minds certain things that they want to cover, and then they want to make it just a little bit broader in case it covers something they couldn't think of. They don't bother to think about all the other stuff they're sucking up with that same definition, and, you know, it becomes absurd. It becomes absurd.

Adam Parks (11:08.204)

The unintended consequences of their actions. And often there are unintended consequences. The challenge for me is that with a lot of them, it's not that they don't understand, it's that they don't care to understand. That becomes the challenge, especially at a state level, where it's not as well thought out, right? You're not dealing with the same caliber of education or business experience as you are potentially at a federal level, because some of these states are

wild, for lack of a better term. It's hard. So talk to me a little bit about that change. We see the CFPB, and I don't wanna say shutting down, because I don't see that as ever being a reality, but they've, let's say, taken a step out of the forefront.

Nabil Foster (11:50.525)

It's shrinking, it's shrinking right now. So therefore, yeah, they take a step back. And one of those laws of nature is that nature abhors a vacuum, right? The space that the CFPB was filling, occupying most of the field, as that retracts, it creates, guess what, a vacuum. And what gets sucked into there? You know, choose your usual suspects: AGs or

other state regulators wanting to step in. Where they think there's a lane, they're probably going to go down it. And we're in an ever-increasing environment of testing of boundaries, shall we say. What we find in the regulatory space is probably going to be the same sort of testing of boundaries, because if the CFPB is not occupying the space, then

others are going to step up and step into it in their little corner where they have jurisdiction, right? And therein lies the problem. It's like, well, the way they make rice in Georgia is kind of different from how they make rice in Seattle, in Washington state. Each one says, well, this is how we do it here. And, you know, you're sort of subject to…

Adam Parks (13:15.316)

an impossibility to manage your business. Because now you've got all of this mix and match, and then on top of that you've got, let's say, the plaintiff's bar basically coming in and trying to create new rules through litigation, in the absence of the CFPB. And I think we're gonna see that step up as well.

Nabil Foster (13:28.543)

Well, yes. Yes, that will step up. Although I haven't seen a real uptick in the plaintiff's bar yet, you know, because of the amount of chaos that's there. It hasn't settled down to exactly where they want to really start to lean in. And I don't know if we're ever going to see another one. Remember Hunstein and that whole debacle, right? I think it started in the Eleventh Circuit, and then it just rippled out, and it's like, okay, so, you know.

There's a trickle of that now. For those of you who remember, for a time period it seemed like every case being filed had a Hunstein claim in it. And we won't tell people what it is, because it's fading into the past. We'll just leave it there. Just let it fade into the past where it belongs, exactly. So there'll probably be something else that will come up, but it may not be that sweeping.

Adam Parks (14:17.24)

belongs.

Nabil Foster (14:24.819)

It'll show up in certain markets, in certain areas where you have interaction with state legislation, with data requirements. It also comes back, as more and more AI is adopted into the systems that people are using, to the front of house, back of house operations. For the front of house, it's easy to say, well, be cautious about this, because

if you have a chatbot answering the consumer's questions about how much their debt is, what's the amount of the debt without the fees, and the chatbot tries to do a calculation and hallucinates, comes up with some sort of number that's not based in reality. In a heavily regulated industry, all right, you've just walked yourself into either a regulatory complaint or a lawsuit. Because now,

instead of the consumer being told he owes $150, now it's $15,000. It's like, how did that happen? They take a screenshot, they're like, okay, that's enough for a complaint, and now you're left explaining that. So, the hallucination stuff aside, which is still a bit of a mystery and a problem, there is what I would call the supply chain security

in AI implementation. And that's where, to get to your point about regulators, I think they're mostly going to focus: trying to ratchet down on, well, do you know how secure this is? You're using this particular product or this type of technology in your, say, back of house operations to do modeling, what have you. You're using AI, you're using something, and how confident are you that

that AI system is, A, secure and, B, doesn't have vulnerabilities? And that becomes difficult, because the people in our industry are not software engineers. They're buying products. As an example, I'll take it back: some of you may remember back in 2017 there was a huge data breach at, I think it was, Equifax, because they didn't apply a patch.

Adam Parks (16:31.948)

Yeah.

Nabil Foster (16:46.069)

There was some vulnerability in Apache's software; Apache came out and created a patch, and the system engineers at Equifax just didn't apply it. That led to a data breach that happened over a period of like 70 days or so, in which 148 million people's information was exposed. That's about half the US population, right? It was so big that even in 2018 there was a congressional hearing on it.


Nabil Foster (17:16.593)

You know, 75 pages of rambling on of, like, you know, wherefores… I mean, just collecting data. Yeah, yeah.

Adam Parks (17:22.946)

Well, and what about the application stack there too, right? Not only are you giving it to your vendor, but what LLMs is your vendor using? You may have fourth-, fifth-, sixth-party disclosures depending on how deep that application stack runs.

Nabil Foster (17:35.957)

That's exactly right. What people don't realize is that a lot of this AI stuff seems to be pivoting in the industry towards open source. That's DeepSeek, the Chinese-based LLM that came out of nowhere a couple of years ago. And it runs on


Adam Parks (18:04.696)

with no borders.

Nabil Foster (18:04.885)

a fraction of the resources that OpenAI's models require, right? And it's all open source. What does open source mean? It means you get to see the code that's actually running this thing. People like that idea: oh, you can see what's in there. Well, yes and no. You can see what's in there, but you have to take the time, and you have to have the knowledge to know what you're looking at. If you don't do that, then with open source

you become vulnerable to what I guess you might call packet attacks. That's not the right word, but basically, with all these open source software products, the architecture is designed for people to look at and then create little packets, little packages of additional code, to help modify and improve the operations of the base programming. The problem is that those packets are where people can insert malicious code.

And if you don't get it from a good source, or somebody maliciously replaces the packet at a source and you download and install it, it's the equivalent of a phishing attack, like when people say don't click on links in documents from someone you don't know. You now have a problem. Say you have an IT person somewhere in your supply chain and they're like, well, let's get that packet,

and they go to one packet distribution hub and, like, that server's too busy. Well, I'll just go to the secondary one, and they grab it from there, because these libraries are all over the place. They've just inserted this poisoned packet, literally, and now that's the weakest link. So the whole chain is now compromised, because now

there's malicious code that's been inserted at that third-party vendor down the way, and they've got a back door into the system, and so forth and so on. That's where I'm saying I think the regulators are probably going to focus a bit more attention. And I don't see anybody who has a real solution for that, because it requires organization and coordination, but in a way that doesn't start to trigger people saying that, you know,

Nabil Foster (20:26.453)

there's some sort of industry collusion there, like anti-competitive practices and so forth. It's like, look, we just want to make sure we don't get a poisoned packet installed in this open software. Anyway, I'm going on.
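The poisoned-packet scenario Nabil describes is what dependency pinning with content hashes is designed to counter. As a hedged sketch (the artifact name and payloads below are invented for illustration; pip's hash-checking mode applies the same idea in real deployments), the check boils down to refusing any bytes whose digest doesn't match a pinned value:

```python
import hashlib

# Pinned SHA-256 digests for approved dependency artifacts. In a real
# pipeline these would live in a reviewed lockfile; this artifact name
# and its "payload" are made up for the example.
APPROVED = {
    "scoring-model-1.4.2.tar.gz": hashlib.sha256(b"good payload").hexdigest(),
}

def verify_artifact(name: str, payload: bytes) -> bool:
    """Reject any package whose content hash doesn't match the pinned value,
    no matter which mirror or 'secondary hub' the bytes came from."""
    expected = APPROVED.get(name)
    if expected is None:
        return False  # unknown artifact: refuse rather than trust
    return hashlib.sha256(payload).hexdigest() == expected

print(verify_artifact("scoring-model-1.4.2.tar.gz", b"good payload"))    # True
print(verify_artifact("scoring-model-1.4.2.tar.gz", b"poisoned bytes"))  # False
```

Checksum pinning doesn't make the upstream code trustworthy by itself, but it does guarantee that the bytes installed are the bytes that were reviewed, which closes the "grab it from the secondary hub" hole.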

Adam Parks (20:40.224)

No, I think you make a really good point, and it actually leads me to another question. Think about the way that, let's say, consumer attorneys have created these scripts for consumers to call in and try to trap collectors into saying the wrong thing. As they start getting a little bit more sophisticated, aren't they going to try to do the same thing with the AI? Like, if they realize that there's,

let's say, a hallucinated response to a particular question, they're just gonna try to get a whole bunch of consumers together to submit that same question and start to create this storyline that they can bring to the court. I mean, it's a manufactured storyline, but I would expect them to do the same thing. They're manufacturing the storyline on the phone; why would they not do the same thing through an AI tool set?
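One mitigation often discussed for exactly this attack is to never let a model's arithmetic reach the consumer unchecked. A minimal sketch, assuming a hypothetical reply filter (the function name, regex, and fallback wording are all invented, not taken from any real compliance product):

```python
import re

def guard_balance_reply(reply: str, balance_of_record: float) -> str:
    """Hold a chatbot reply unless every dollar figure in it matches the
    system of record; otherwise fall back to a pre-approved template."""
    amounts = [float(m.replace(",", ""))
               for m in re.findall(r"\$([\d,]+(?:\.\d{2})?)", reply)]
    if all(abs(a - balance_of_record) < 0.01 for a in amounts):
        return reply
    # A hallucinated figure never reaches the consumer (or their screenshot).
    return f"Your current balance is ${balance_of_record:,.2f}."

# The $150-versus-$15,000 scenario from earlier in the conversation:
print(guard_balance_reply("You owe $15,000.", 150.00))          # falls back to the template
print(guard_balance_reply("Your balance is $150.00.", 150.00))  # passes through unchanged
```

The design choice is that the model drafts language but never gets the last word on numbers; anything it can't substantiate against the ledger is replaced, not explained after the fact.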

Nabil Foster (21:27.879)

And actually, the AI tool side is probably easier to do that with, because it's all typed out. It's not a recording. You have the chat on the screen, they just copy and paste, and that's Exhibit One to the complaint. I think your sense of caution regarding the vulnerability to manipulation is real. And that's another reason why there are

all these sorts of cautionary tales. My personal opinion: I don't think any of the AI is ready for consumer-facing direct communications. That's, I think, the risk. And when we talk about hallucination, that goes back to the black box. There was a study done last year, in June of 2024. It was the first systematic study done. Was that Stanford? We'll call it the RegLab, or the Human-Centered AI institute, something or other.

If you Google it, you'll find it. It's a 2024 study, and they focused on using the publicly available chatbots, so ChatGPT, the ones that are publicly available. And they used them on legal issues, right? Trying to use them for the law. This factors more into, you know, my wheelhouse of legal ethics, teaching that to lawyers, and the risk to lawyers of using

Adam Parks (22:27.17)

Google it, you'll find it.

Nabil Foster (22:53.257)

these chatbots to write legal briefs or do things. It found that those LLMs hallucinated at least 58% of the time, that they struggled to identify their own hallucinations, and that they often uncritically accepted incorrect legal assumptions in the queries, right? So basically they're saying there's no quality check. The old adage, garbage in, garbage out, is what they're saying when it comes to the law. Now, why this is interesting is because

Adam Parks (23:13.047)

Interesting.

Nabil Foster (23:22.985)

we often think, well, we have all this legal reasoning, we have all these legal opinions, we have all this clarity. You know, lawyers charge by the word, and they just go on and on and on. And you would think that could correlate to some sort of mathematical probability that these chatbots, these LLMs, could figure out and use.

Well, apparently not. Either the inputs are just bad, or there's still a whole lot about the hallucination stuff that they don't understand in the neural network, in the training and the weighting. As some people may not know, when you have an AI model like ChatGPT or DeepSeek, whatever, that's a model, and they come up with different models. A model consists of the actual programming code. Then there's the training of

that sort of code model, and then there's the weighting. And what does that mean? Training is the actual data set: you try to have a curated data set to help these algorithms, this code, start to make the appropriate associations between particular data points, and they start to see that. And then how do you help it? Because otherwise it's just going to run randomly. You use the weighting as a way of saying, no, no, there's a closer relationship between these two data points

than these two data points. Perfect example: you ask an AI engine to look at a photograph and count how many cats versus how many dogs are in it, right? Well, you and I are like, well, I know what a cat is and what a dog is. But the AI engine has no idea. Both of these creatures have four legs, they've got fur, they've got ears, some have big tails, small tails, right? But there's a combination of these data points, this close association, that

focuses on the characteristics of cats versus the characteristics of dogs, right, at different sizes. We're able to adjust for that because of the way our brains work. With these models, you've gotta use weighting to help them figure it out, so the model starts to recognize the shape of furry ears and a furry tail. Okay, that in connection with some other things means it's more probably a cat than a dog, but then you need to look at some other things. That's the weighting part.
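The cat-versus-dog weighting Nabil describes can be made concrete with a toy linear score. Everything below is invented for illustration; real models learn weights over millions of opaque features rather than six named ones, which is exactly the black-box problem discussed earlier:

```python
# Toy "learned" weights: features shared by cats and dogs carry no signal,
# while discriminating features get weighted toward one class or the other.
WEIGHTS = {
    "has_fur": 0.0,            # both animals have it: weighting learns to ignore it
    "four_legs": 0.0,
    "retractable_claws": 2.0,  # positive weights lean "cat"
    "slit_pupils": 1.5,
    "barks": -3.0,             # negative weights lean "dog"
    "wagging_tail": -1.0,
}

def classify(features: set) -> str:
    """Sum the weights of the observed features; the sign picks the label."""
    score = sum(WEIGHTS.get(f, 0.0) for f in features)
    return "cat" if score > 0 else "dog"

print(classify({"has_fur", "four_legs", "retractable_claws", "slit_pupils"}))  # cat
print(classify({"has_fur", "four_legs", "barks", "wagging_tail"}))             # dog
```

The Goldman Sachs example fits the same frame: nobody writes a "gender" weight on purpose, but if features correlated with gender carry weight, the score leans anyway.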

Nabil Foster (25:43.219)

And that's where the regulators are also probably going to look. I don't think regulators are going to be able to understand what the coding is, or the training, but the weighting, they'll probably focus on that, because the weighting is the user input from the humans saying: this is what we think is the right answer, so we're going to adjust these levers so that the model produces this result

more often than what we think is the wrong result. Now, the problem is that the intentions are sometimes about very specific use cases, and with the unintended consequences of how things are being weighted, you don't know where it's gonna come out. And to your point, Adam, go back to Goldman Sachs. There probably wasn't anyone who said,

we're gonna use gender as a determining factor for how rates of interest or credit extension are going to function. But with that data point, or that weighting towards one or the other, there's probably some association with other data points, where the model said, okay, well, we're being told to weight towards this characteristic or this thing, and

Nabil Foster (27:09.555)

that has more association with this gender as opposed to that gender, and so the model learns: well, we're gonna weight this one more. I wasn't there, I didn't program it, but that's the type of thing that…

Adam Parks (27:25.262)

It's trained on the bias of human decision-making. If there were humans making that decision and this model was trained on those biases, then that AI model is also going to have that bias. But I also want to go back to hallucination for a minute, because this is one of those really interesting things for me, and it's been my experience: if I ask it to write a whole brief, I'm going to get trash as a result, because these models are not great at doing long-form anything.

It's just not built for that purpose. But if I think about it and I send out small pieces, and I have it write paragraphs at a time, I get much better results. And I think that's something we don't really look at all that much when we're evaluating products. And I think

the result of that is going to become very important over time. We're responsible for the output of any model that we're using. I mean, I'm not a lawyer, but if I were to use it to write a legal brief, I'm still responsible for everything in that brief, as the courts have held through the issues that have come up over the past couple of years: false citations, citing cases that don't exist, all of those types of things. And when you start asking it to do some math,

Nabil Foster (28:32.35)

Yes.

Adam Parks (28:45.26)

you really do always have to ask it a lot of questions about how it got there, right? And, not to be all fourth grade about it, but: show your work. Show your work.

Nabil Foster (28:52.499)

It really is. You're absolutely right. It's show your work. I was thinking about this the other day. We all have tests, you go through school, and there's a workspace, and teachers are looking for: did you get the right answer? But they're also looking to see, did you show your work? How did you get to the answer? Because especially on multiple choice questions, they want to know the work, not just that you got the right answer. That's also important, but how you got through the work

tells the teacher something. So that is absolutely correct. And to your point about writing a whole brief: as these tools become more sophisticated, it's going to require better human judgment about where they're deployed, the specific use cases. The idea that AI is like, you know, a miracle salve, where you just put it on there and stuff happens, that's not going to work, you know.

For a lot of people in our industry, AI is not going to be a real replacement for full-time equivalents. But there can be tools that help your full-time equivalents do their jobs better. For example, let's say on the back of house you have modeling, you have a thing, and it helps

your frontline collectors, the people who are making the phone calls or responding to emails. Although, from a security standpoint, a lot of security people say don't have open email accounts where people are sending and receiving stuff, because there are a lot of open vectors to be attacked and get infected. So I'll just say, some people use email; I think it's unnecessary risk.

You got a lot of people who just might cycle through your organization and don't really care. They're like, yeah, sure, I wouldn't do this on my home computer, but my work has good security, so I can click on these links and not worry about it. Then your system's infected. But my point is that there are tools that can make those people more effective or more efficient at their jobs. They're never going to replace them entirely, but they can help them do their jobs better, because it's a tool.

Nabil Foster (31:20.437)

But it comes down to human judgment: well, where's a good place to implement this tool? It's like you look at a tool chest and you're like, okay, I've got a chisel, I've got a crowbar, and I've got a screwdriver, right? Which is the best tool for what I wanna do? That's really a human judgment issue, and that's something AI is never gonna be able to replace. The human judgment, never gonna be able to replace. The human capacity to take a pause and question yourself, to say, am I doing this right?

That's something that, quite frankly, only humans really possess. It goes back to the old adage, and anybody who's done a little woodworking has heard this: measure twice, cut once. Measure twice, cut once. What does that really mean? It means, okay, you think you know what the dimensions are for your table, and you measure it all out, you're like, okay. Now, before you actually start cutting wood, you're like, all right, let me

go back to basics. Let me get the tape measure out and make sure I measured this correctly, right? You're doing a verification. Now, you would say a machine can measure it again, but the problem is that the AI performs the function exactly the same way; it's the order of operations, as you'd say in mathematics, and it'll do it the same way. It really is the human capacity to say: I've measured this, but before I implement, before I cut, let me just

make myself feel certain that I got these measurements right. And as you go through it again, it takes a little time, and that's where you find, wait, I was off by a quarter inch here. Okay, user error or what have you. Better that you catch it then and get it right. AI doesn't have the capacity to catch its own mistakes. Even if you ask it again, write a paragraph about this, summarize these facts, and then you ask it,

did you make any errors in this thing? And the AI will be like, nope, everything's correct. And it's like, that's where you need...

Adam Parks (33:17.614)

It's all about how you ask that second question, though, right? And are you using the same model to watch your model? Asking the model that just spit something out, is this accurate, is one thing; asking another model if that was accurate may be something else. But that's starting to get rather complex. And now, as we talked about, there's that application stack and all of the things that come along with, you know, a third, fourth, fifth, sixth piece of software in that stack.

Nabil Foster (33:22.825)

Yeah, or

Nabil Foster (33:35.455)

That’s correct.

Adam Parks (33:47.768)

I think ultimately that creates some new challenges. And we look at the use cases across the debt collection industry, and I think there are some use cases that are going to catch fire faster than others because they're safer plays: behavioral scoring, understanding mass data sets, and just trying to understand the behaviors of different

segments of consumers based on the actions that they've taken. And when I say actions, I'm referring to opening an email, opening a text message, clicking on something, going through and authenticating themselves. I truly believe that the debt collection industry will become mostly an e-commerce business in the next ten years. What is that ultimately going to mean in terms of trying to understand it? As I continue to make that statement, some people give me the weird look and other people go, yeah, it really is going in that direction.

I think there’s a lot of us that are still pushing back against that theory. However, in trying to prepare for it in my own mind, I’ve started looking at massive amounts of behavioral information from websites to better understand how the consumers are using it, how the websites need to change in order for consumers to do that, and looking at the e-commerce funnels and comparing that to the debt collection funnels. Because every sale…

Right? When you're collecting debt, you're still selling that consumer on paying you versus the other four or five people that they owe money to. And so that sales funnel, the idea of an e-commerce funnel versus a debt collection funnel, I think ties together quite nicely. And as you start looking at that data, where are people dropping out and why, how is that going to impact the messaging that we're sending, the subject lines on the emails that are outbounding, the language that we're using in text messages, et cetera, et cetera.

Nabil Foster (35:11.285)

Correct.

Adam Parks (35:34.23)

It's similar to the way that we've tweaked scripts through the years, but now, based on behavioral information, you can continue to tweak that content going forward. Using artificial intelligence to determine which of these preformed messages that my lawyer has already approved is the best message for that consumer is one use of AI. Having generative or agentic AI actually draft that on the fly and send it out without a human checkpoint is a very different animal. And that's where I really want to start looking

as an industry at what makes sense and what doesn’t make sense for us as a highly regulated industry.

Any thoughts? Like, have you seen anything like that in the wild where people are actually sending the emails? I mean, some of those generative AI voice tool sets are responding to those consumers, and I don't know that that's fully scripted, whereas selecting the appropriate preformed message seems like a much safer bet. But it feels like we need to crawl before we walk.

Nabil Foster (36:12.223)

Yeah, no, that makes sense.

Nabil Foster (36:36.405)

I would most definitely agree that crawling before walking is going to reduce the amount of pain. It's not if some people fall, it's when. Because in the rollout of these types of tools and where they want to implement them, one vendor, one implementation versus another, it's changing so rapidly that it's not like

Nabil Foster (37:05.308)

It's not like anything else we've had in the past, right? Everything up until now, yes, there have been leaps in technology, but it's been along a curve. You know, the increase in the capacity of microprocessors, how many operations per second they can perform, it doubles every however many months, and that has to do with

the refinement of manufacturing processes, and it's something that can be defined. Or the advent of cellular communication technology. Years ago, back in the 80s, you had to have a brick, it was a brick phone, right? It was a thing that weighed probably 10 pounds. It was huge. There was a battery pack that was literally like 15 times the size of the handset.

And now we have, you know, the flip phones and everything since. That technology has increased in complexity, but it's along the same lines. It's predictable. No one's going to think that their cell phone is suddenly going to start spying on...

Adam Parks (38:23.342)

Text messaging became ubiquitous in 2007, right? The launch of the iPhone, and the beginning of Android shortly thereafter, really started text messaging. Well, it's 2025 and I'm still talking to debt collection groups about implementing text messaging. So let's not pretend that we're a fast-moving industry, because we're so heavily regulated. The fear that I see now is, if the technology is moving so fast,

Nabil Foster (38:43.913)

That’s exactly right.

Adam Parks (38:50.242)

Right? And we have never really kept up with technology in that realm previously because of the regulations associated with our space. What kind of threat level does that create for us when we start running so fast that we trip and fall? Over the last couple of years, we've seen quite a few very reputable, well-known organizations across the space that are not here anymore because of data breaches.

Are we then going to see a wave of AI-related incidents, whether that be the cause of a data breach, or litigation, or class actions that are specifically related to the use of artificial intelligence?

Nabil Foster (39:28.885)

I think there's a substantial risk of that, and it goes back to those malicious packages that are out there in your supply chain. You implement more and more complex systems that you rely upon, vendors upon vendors, and the fourth-party vendor who's supplying a component that's being integrated into your product, yes, it's all open source, but...

therein lies the vulnerability. And guess what? The criminals who are looking to siphon all that data, they are going to each and every system out there, looking for a vulnerability to exploit. And here's the thing: for debt collectors, this particular industry, the core data that drives this business is the exact data that

the identity thieves, and the people who want to sell it to them, want to get their hands on. If they have more trouble getting it from the banks, right? If they can't get it from the banks or the credit bureaus, guess who they're going to look at next, if they're not already looking? They're going to come tiptoeing into these systems, trying to find ways in. And then we start having consolidation of vendors, a certain supply chain where, well, everybody's using, you know,

 

Nabil Foster (40:49.071)

one of three systems for interacting with consumers. And that uses some sort of open-source product, and now someone happens to install the wrong package. It's like, okay, all the accounts that pass through that agency are now exposed; they're all being exported in the background, and you don't know. And if you don't have any sort of

policy or procedure to check your network traffic, right? You don't know. That's how they figured out there was a data breach in that case: they had a network analysis done, and exactly like you said, wait, there's some network traffic that we're not expecting here. Where's this going? And they're like, okay, that's a problem.

Adam Parks (41:31.394)

I saw packets leaving that shouldn’t be leaving. Yeah.

Adam Parks (41:39.278)

Extraction of large portions of data is one of those big issues and the debt collection industry is quite literally a honeypot.

Nabil Foster (41:47.049)

Yeah, yeah. So you're putting...

Adam Parks (41:48.61)

Right, by all definition.


Nabil Foster (43:32.78)

It's the growing area of data privacy and control of data, and I think you're right that there aren't a lot of guidelines for how that's going to play out and how to prepare yourself for those risks, because you have different states coming up with different data privacy regulations regarding if there's a breach. And for that matter, how do you know?

One of these data breaches could end a company, meaning just the notice provisions that are imposed after a data breach, not only from a federal standpoint, but from the states. If you're collecting in multiple states, each one of those states is probably going to have its own data breach notification protocol: what are you supposed to do? Now, practical advice for people listening to this is:

there really is genuine value in cyber insurance for data breaches, and there's no reason why you have to reinvent the wheel to try to figure this stuff out beforehand. If you have a data breach, that's what that insurance is for, and they have people who have all the systems in place to be able to take care of that. At my small firm, one of the guys who joined us in our Houston office came from one of those large firms that does data breach work. So he has

a data breach practice and handles the litigation. But the best advice we give clients is, hey, if you have a data breach, go with your policy. Your policy is going to have the panel counsel, they're going to have all this stuff in place so they can do the notifications. It's going to be effective. But once that's done, you don't have to continue to pay the $1,000 an hour

to those particular panel firms to deal with any subsequent litigation. You can go choose your lawyer of choice who knows something about data breaches. And that's sort of the aha moment for some people: you get the benefit of this policy, but you're not locked into just one of five firms who handle quote-unquote data breaches. Because there can be some big-money cases there.

Adam Parks (45:51.143)

Sounds like a very expensive endeavor and probably the reason that we’ve seen some organizations shut down through that process. Now as we look at the artificial intelligence, I think that there’s some really interesting risks here. You bring up a lot of really good points about the state level patchwork and what that ultimately is going to mean because just like we’re dealing with

statute of limitations in different states and jurisdictions. And I think you're going to start seeing even some of these cities that like to just randomly pass things start to get involved as well. And how do you start creating segregation between those data sets? We've already had those challenges across the debt collection industry for years now in dealing with CCPA. And then you've got other requirements in Virginia, you've got other requirements in New York, and you're just trying to deal with that patchwork. I don't see it getting any easier, but maybe someone will build a

tool that will help our industry actually manage that process, who knows. But Nabil, this has been a phenomenal conversation. I can't thank you enough for coming on and sharing your insights with me.


Nabil Foster (46:54.318)

Well, thanks for having me, Adam. You always put together good content, so happy to be part of it.

Adam Parks (46:58.574)

Well, I greatly appreciate that. For those of you that are watching: if you have additional questions you'd like to ask Nabil or myself, you can leave those in the comments on LinkedIn and YouTube, and we'll be responding to those. Or if you have additional topics you'd like to see us discuss, you can leave those in the comments below as well. And hopefully I'll be able to get Nabil to come back at least one more time to help me continue to create great content for a great industry. But until next time, Nabil, really, thank you so much for all of your insights. This was a fantastic conversation.

Nabil Foster (47:24.21)

Thank you, Adam, and keep on doing the good work you're doing. And also, congratulations again on your recent baby. So, you know, take care of yourself.

Adam Parks (47:33.442)

Thank you so much. I appreciate it. And thank you everybody for watching. We’ll see y’all again soon. Bye everybody.

Nabil Foster (47:39.566)

Bye for now.

Introduction

Did you know that one misstep in your AI deployment could result in a regulatory inquiry lasting nine months and costing hundreds of thousands in legal fees?

In the latest episode of the AI Hub Podcast, host Adam Parks sits down with Nabil Foster of Barron & Newburger to discuss the legal risks of AI in collections. With the rise of chatbots, AI segmentation models, and open-source tools, it’s more important than ever to understand how regulators and plaintiff attorneys are viewing this evolving space.

This episode is essential listening for credit and collections professionals seeking to align innovation with compliance.

Key Insights from the Episode

1. Regulators Are Watching the Front of House

  • AI tools that interact directly with consumers are under increasing scrutiny.
  • Regulators are far more skeptical of consumer-facing tools, especially when they replace human communication with automated responses.
  • Any AI-powered communication must be transparent, traceable, and explainable.
  • Failure to understand how consumer data is used or displayed can result in regulatory backlash.

“They’re asking use case scenarios because they want to know, how does this get rolled out? What does it look like? What is the consumer… what is the consumer faced with? What are the choices? What are the options?” – Nabil Foster

2. Behavioral Data Is Safer Than Demographic Data

  • Using fixed characteristics like zip code or gender in AI models may lead to discriminatory practices or redlining.
  • Regulatory bodies are increasingly aware of implicit biases embedded in algorithms.
  • Behavioral indicators, such as click-through rates, portal logins, and email interactions, are more defensible because they reflect consumer actions rather than protected attributes.
  • The use of behavioral data also enables more personalized, fair, and effective engagement.

“We can look at the Goldman Sachs instance, when Goldman Sachs was issuing the Apple Card.” – Adam Parks
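To make the distinction concrete, here is a minimal sketch (not from the episode; the event names and weights are hypothetical) of scoring engagement purely from behavioral signals, with no demographic inputs at all:

```python
# Score consumer engagement from behavioral events only (opens, clicks,
# portal logins), never from protected attributes like zip code or gender.
# The event names and weights below are illustrative, not a real model.
WEIGHTS = {"email_open": 1, "link_click": 2, "portal_login": 3}

def engagement_score(events):
    # Unknown events contribute nothing rather than raising an error.
    return sum(WEIGHTS.get(event, 0) for event in events)

print(engagement_score(["email_open", "link_click", "portal_login"]))  # 6
print(engagement_score(["email_open", "email_open"]))                  # 2
```

Because every input is an action the consumer actually took, a score like this is easier to defend than one built on fixed characteristics.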

3. Open-Source AI Has Hidden Vulnerabilities

  • Many companies rely on open-source components in their AI tech stacks, unaware of potential security risks.
  • These components are susceptible to “poisoned packages”: malicious pieces of code introduced through third-party libraries.
  • A single compromise in a fourth- or fifth-party vendor’s software could create a backdoor to sensitive consumer data.
  • Due diligence in your vendor relationships must go beyond the immediate provider to include their upstream dependencies.

“You’re giving it to your vendor, but what LLMs is your vendor using? So you may have fourth, fifth, sixth party disclosures depending on how deep that application stack runs.” – Adam Parks
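As a starting point for that due diligence, even a simple automated check can flag dependencies that are not pinned to an exact version, since a loose version bound is one way a poisoned release can slip into your stack unnoticed. This is a minimal sketch, not a substitute for a real software composition analysis tool, and the package names are made up:

```python
# Flag requirements.txt-style entries that are not pinned to an exact
# version; an unpinned dependency can silently pull in a compromised release.
def audit_requirements(lines):
    findings = []
    for raw in lines:
        entry = raw.split("#")[0].strip()  # ignore comments and whitespace
        if not entry:
            continue  # skip blank or comment-only lines
        if "==" not in entry:
            findings.append(f"{entry}: not pinned to an exact version")
    return findings

sample = ["requests==2.31.0", "some-llm-sdk", "numpy>=1.20  # loose bound"]
for finding in audit_requirements(sample):
    print(finding)
```

In practice you would pair a check like this with hash verification and a dedicated vulnerability scanner, and apply it to your vendors' dependency manifests as well as your own.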

4. AI Hallucinations Can Lead to Legal Claims

  • Hallucinations refer to incorrect or fabricated outputs by AI models, especially generative ones.
  • When AI tools make up facts or figures and pass them on to consumers, this can be the basis for a complaint, lawsuit, or regulatory fine.
  • In regulated industries like debt collection, these hallucinations are particularly dangerous if they lead to consumer harm.
  • Proper oversight, validation, and human intervention are essential to prevent these errors.

“You’re responsible for the output of any model that you’re using.” – Adam Parks

Actionable Tips for Managing AI Risk

  • Start with back-of-house applications like segmentation models instead of consumer-facing chat.
  • Request transparency from vendors about their LLM sources, data handling, and model explainability.
  • Invest in cyber insurance that includes data breach and AI-related incidents.
  • Use behavioral data responsibly and always audit for potential bias.
  • Test outputs thoroughly and document your model logic.
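The “pre-approved messaging” approach discussed in the episode can be sketched as a hard gate: the AI may only select a template that counsel has already signed off on, and anything outside that set is rejected and logged rather than sent. The template IDs and text below are hypothetical:

```python
# Counsel-approved templates are the ONLY messages that can go out.
# A model may choose a template_id, but it can never draft free text.
APPROVED_TEMPLATES = {
    "payment_reminder": "Your account has an outstanding balance. "
                        "Visit the portal to review your options.",
    "portal_welcome": "Thanks for verifying your identity. "
                      "Your account details are now available.",
}

def send_message(template_id, audit_log):
    # Reject anything outside the approved set instead of sending it.
    if template_id not in APPROVED_TEMPLATES:
        audit_log.append(("rejected", template_id))
        return None
    audit_log.append(("sent", template_id))
    return APPROVED_TEMPLATES[template_id]

log = []
send_message("payment_reminder", log)  # approved: returns the vetted text
send_message("improvised_text", log)   # not approved: rejected and logged
print(log)
```

The audit log gives you the traceability regulators ask about: every send, and every attempted send, maps back to a reviewable decision.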

Timestamps for Key Moments

  • 03:15 - How regulators view AI use in collections
  • 06:30 - Chatbot compliance and front-of-house vs back-of-house risk
  • 09:45 - The danger of redlining and demographic data misuse
  • 22:30 - What hallucinations are and how they impact legal risk
  • 36:10 - AI adoption strategies that avoid litigation and regulatory red flags

Frequently Asked Questions About Legal Risks of AI in Collections

Q1: What are the legal risks of using AI in debt collection?
AI can lead to lawsuits or regulatory actions if it treats consumers unfairly, generates incorrect information, or uses biased data.

Q2: How can agencies reduce their AI risk exposure?
Stick to back-end applications, ensure transparency from vendors, and always involve legal review before launching AI tools.

Q3: Are chatbots safe to use in collections?
Not yet. Chatbots that hallucinate or miscalculate can easily generate complaints. Use pre-approved messaging and maintain human oversight.

Q4: What’s the risk with open-source AI models?
Open-source code can include malicious packets that expose systems to data theft. Vet all dependencies thoroughly.

About Company

Barron & Newburger, P.C.

Founded in 1981, Barron & Newburger, P.C. has a national presence in advising and defending the credit and collection industry, guiding any party that can appear in a bankruptcy case through the intricacies of reorganization and insolvency proceedings.

About The Guest

Nabil Foster
