How to use AI in debt collection is no longer a future concept — it’s a now strategy. In this eye-opening episode, Adam Parks sits down with John Nokes from National Credit Adjusters to explore real-world applications, repeatable AI workflows for receivables, and the key shifts in debt collection operations powered by artificial intelligence.
Adam Parks (00:07)
Hello everybody, Adam Parks here with another episode of the AI Hub. This is quickly becoming one of my favorite podcasts, not only to record, but to actually listen to because I’m getting an opportunity to learn from some really great people. And for those of you that did not see my Receivables Podcast episode with Mr. John Nokes from
National Credit Adjusters, I’m gonna suggest that you’re gonna wanna check that one out, because that’s what prompted me to get into prompt engineering and start leveraging artificial intelligence across my own organizations. But before we jump into that with John today, let’s get a quick word from our sponsor, Latitude by Genesis.
All right, John, thank you so much for coming back and spending a little bit more time with me here on camera, talking about your artificial intelligence journey and how your journey impacted my journey. Because I think that’s a really interesting part of the process. But for anyone who has not seen the last episode that we did together on Receivables Podcast, or not had the opportunity to become your friend like I have, could you tell everyone a little bit about yourself and how you got to the seat that you’re in today?
John Nokes (02:09)
Certainly. So it’s great to be back. I’m actually surprised you wanted to talk to me again. It’s great. I love it. It’s a lot of fun. So I’ve been in the receivables industry for, holy cow, 25 years or so. I got in in the late 90s.
After I got out of the Marine Corps, I got into it, and I’ve been with companies in San Diego and here in Austin, and now in Kansas. I was actually a Latitude customer for several years and really enjoyed the product. I’ve been in technology my whole career, and last summer I took a one-day, sort of high-level seminar on AI, trying to understand what this is and how to get into it.
And it got me really excited. It got me realizing I don’t know anything about it. But I came out of there saying, wow, I think I have a handle on it. I think I’m going to be able to do it. Then I started doing it. I’m like, OK, this is much harder than I thought it was. And there’s a lot more to it.
There’s no easy button. There’s no simple thing to do. And a challenge I have is when I talk to my executive team, or I talk to other people, everyone has a different version of what AI is. You say AI, it’s like, well, it does this. Well, it could, but not necessarily. You have to be more specific: what do you want it to do, and how do you want it to do it? And there’s just so much out there. It’s incredible. A lot of fun stuff.
Adam Parks (03:37)
Well, for me, it was our last conversation that really
drove me towards testing this out. We had recorded back in September of 2024, and we released it in October. And I was down in Brazil, and I had a little bit of time while I was down there. And I said, man, I better get out in front of this before I get left behind. I had really been against it. I was really pushing back internally in my organization in 2023, and even in early 2024, saying, look, we create content. We’re going to create natural, original content. That’s our strength. That’s who we are.
Hell, I didn’t even know who we were. Because what I’ve learned through this process is that there’s really no easy way to do it. The more that you can understand it, and the more that you can create a repeatable result from an artificial intelligence model, that’s where the success was. Because when I first started playing with it, I was getting all different kinds of responses, even to the same prompt. And I really had to work on prompt refining and prompt engineering to better understand how those things were going to come together and where that value was really going to be for me from an organizational perspective,
because at the time I looked at ChatGPT as a cool tool. This could be a really fun thing for me to, you know, play around with and have a conversation with the computer and whatever. But how do you operationalize it, right? And you were the first person that I really talked with about some of the things that you were experimenting with. So have you had an opportunity to play with some additional tools? How are you starting to look at that? Or how has that started to shape your mind frame around technology in general?
John Nokes (04:56)
Exactly.
So last time we talked, we were going down a path of, you know, one of the things we want to be able to do is take documents and pull data out of documents, right? We get mail from our consumers and right now we have to have a person read the mail and they look up the account with the address and the name and try to do all this stuff, right? I’m like, well, you know, we could use OCR. OCR is hard though because if it’s not the same form every time, it’s not great.
John Nokes (05:36)
Well, I’m like, well, shoot, let’s use AI and have it read it and do it. My programmer, he’s really good. He built something really quick, and the results weren’t very good. And it’s not because it wasn’t a good tool. It was, we didn’t know the questions to ask. And were we using the right model? We were using an inexpensive model because we were just playing around testing. So we weren’t spending a lot of money on a model, and that makes a difference. And so we have to figure out what the right model is. We have to figure out what
John Nokes (06:03)
the right prompts are to ask. We were asking very general prompts, and sometimes it’s great, sometimes it’s not. And so I realized that what the prompts are is really important, and I don’t know enough. So now I’m at a point where I want my guys to learn it, but I think at this point,
Adam Parks (06:10)
inconsistent.
John Nokes (06:24)
to not get too far behind the curve, we need to figure out which vendors we can use to help us move forward. And don’t reinvent the wheel. Let them help us go forward, and then we can add to it and improve upon it. And I think what I’m learning is I didn’t understand the importance of prompting and what the prompts were. If you don’t ask the right prompts, if you do general prompts, you’re going to get general answers. And so, at least as far as I know, you have to be specific. And as we talked a bit earlier, I’d love to hear your details on this: how do you progress, how do you sort of refine the prompts?
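John’s document-extraction experiment (having a model read consumer mail and pull out account details) can be sketched roughly as follows. This is a minimal illustration, not National Credit Adjusters’ actual pipeline; the field names, JSON shape, and confidence threshold are all invented for the example:

```python
import json

# Fields to extract from a piece of consumer mail. Illustrative names only.
FIELDS = ["consumer_name", "mailing_address", "account_number", "request_type"]

def build_extraction_prompt(letter_text: str) -> str:
    """Ask for strict JSON so the reply is machine-checkable."""
    return (
        "Extract the following fields from the letter below. "
        "Reply with JSON only. For each field give a 'value' (or null) "
        "and a 'confidence' between 0 and 1.\n"
        f"Fields: {', '.join(FIELDS)}\n\n"
        f"Letter:\n{letter_text}"
    )

def needs_human_review(reply_json: str, threshold: float = 0.8) -> bool:
    """Route the letter to a person when any field is missing or low-confidence."""
    data = json.loads(reply_json)
    for field in FIELDS:
        item = data.get(field)
        if not item or item.get("value") is None:
            return True
        if item.get("confidence", 0.0) < threshold:
            return True
    return False
```

Asking for strict JSON with per-field confidence gives you something a program can check, so low-confidence letters can be routed to a person instead of being processed blindly.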
Adam Parks (07:01)
It’s being specific enough, but not asking too many things at once. And one of the big things that I’ve learned is, let’s say, for example, that I wanted to create a piece of
content, right? Or I want to create an evergreen campaign. I want to take all the videos from National Credit Adjusters and I want to create new social posts from those so that we can continue to reuse that content. That’s a pretty common marketing tactic. We call it evergreen posts. So if I want to create an evergreen campaign and I take all the transcripts from the videos and all the things and I put it together and I put it into one prompt, I’m getting trash. I’m going to get a trash output.
Now on the flip side, if I take one transcript, one line at a time, I take one video and I create two posts from it, it’ll be awesome. And so I think one of those things is, how much am I asking the machine to do at once?
And if your question includes so many different criteria, it’s not capable of thinking in that way. It’s not like a human where if you give it a more complex question, it’s going to sit there and break that into segments and start to work its way through the problem. The way that you might assign work to somebody on your team.
In this case, it’s a little bit different. You have to ask it one little question. Okay, let me get that response back. But you want to ask the questions in a way in which it can learn from the questions that you’re asking and the responses that you’re getting. So I’ll give you an example: when I
produce this podcast. I’m going to use artificial intelligence in the riverside.fm app in order to actually produce the podcast itself, clean up the sound, pull out the ums, ahs, right? I’m going to pull out the filler language and things and that’s all going to be from an automated standpoint. Then I’m going to be able to use a handful of keywords and I’m going to be able to create the short videos that I need to really promote this piece of content and make sure that I can get it out to as many people as possible. Well, then I need to go through
the process of actually creating the written content that goes along with it. So using the transcript from this conversation and using a series of prompts, the first thing I’m gonna ask it is for the keywords. Tell me what keywords I should be using to actually draft this. Like let’s start at the very beginning. And I select my keywords. And then I’m gonna put those keywords in and I’m gonna say for the remainder of this discussion or the remainder of this chat thread, this is the primary and this is the secondary keyword that I’m gonna focus on.
Now give me a title. Give me 15 titles to choose from, and I’ll go in and I’ll pick from those 15 titles. And I’ll work my way through it. But each one of the answers and responses that I get is working in that same chat thread, and I’m improving its responses each time.
It’s building on itself, and I’m starting to learn from it and keeping those things formatted. Now the challenge starts to become when you get inconsistent responses. Podcast transcript A gives me this, podcast transcript B gives me that. And that’s where the prompt engineering, I think, really starts to become a major part of it. And so I never ask the AI to give me an answer.
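Adam’s chained approach (pin the keywords once, then let every later question build on the same thread) reduces, in code terms, to keeping one growing message history. A rough sketch, with the model call stubbed out since the exact API is beside the point:

```python
# One growing message history = one chat thread. Every step sees all the
# steps before it, which is what lets later answers build on earlier ones.

def start_thread(primary_kw: str, secondary_kw: str) -> list:
    """Pin the keywords up front, as Adam describes, so every later
    question in the thread is answered against them."""
    pin = (f"For the remainder of this chat, '{primary_kw}' is the primary "
           f"keyword and '{secondary_kw}' is the secondary keyword.")
    return [{"role": "user", "content": pin}]

def ask(history: list, question: str, model=lambda h: "(stub answer)") -> str:
    """Append the question, get an answer, and keep both in the history.
    'model' is a stand-in; real code would call a chat-completions API."""
    history.append({"role": "user", "content": question})
    answer = model(history)
    history.append({"role": "assistant", "content": answer})
    return answer
```

So "give me keywords," "give me 15 titles," and "critique your response" are all just further `ask` calls on the same history, never a fresh thread.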
I never say, just give me the answer. I’ll give you an example: we had a baby recently. And as part of the preparation process, I said, okay, let me evaluate the formulas on the market. Let me try to understand it. And rather than going to ChatGPT and saying, tell me the best formula, what’s the best formula I should give to my daughter? Right? That’s kind of a general question. It’s rather
broad. It’s very difficult for the AI to come back and provide me with a reasonable response to what’s the best formula. Well, based on what, right? What criteria are you using to make that decision? So my question to the AI was very different. I went to ChatGPT and I said, tell me the 30 criteria that I should use in selecting a formula for a child.
John Nokes (10:41)
Yeah.
Adam Parks (10:52)
Given these concerns: I don’t want a formula that’s created by a pharmaceutical company. I don’t want one that does this. I want it to be organic. I want it to be lactose-based. Right, so I gave it a couple of high-level things that I was looking for, and then it gave me 30 criteria, and they were detailed criteria. And then the second question, rather than saying, okay, now tell me the best one? Okay, hold on. Pretend you’re a pediatrician and critique my
prompt. And it would go through and critique it and provide me with some feedback. Okay, now apply that feedback to my prompt. Update my prompt to account for this feedback. Okay, so now I’ve got that one in place. Then I’ll say, pretend that you’re a ChatGPT engineer and reorganize the order of operations of this prompt to optimize your response. Okay, well, what’s the most important thing in math?
The most important thing is the order of operations. No matter what kind of complicated calculation you’re gonna do, the order of operations is the key to correct math. Look, I had no reason to believe this, but my brain said, hey, if the AI is a giant learning algorithm, there’s probably a whole bunch of math in there, so let’s make sure the order of operations is correct. Okay, now pretend you’re a ChatGPT engineer and critique your response to my prompt.
John Nokes (11:49)
Yep.
Right.
Adam Parks (12:13)
Why is the response that I’m getting the response that I’m getting? Critique that and tell me how I should improve the prompt to get a more detailed and thorough response from you. And I continue to do that. Now I’m 30 minutes deep into this, right? Like I’ve asked it a bunch of questions. Now I’m gonna say,
John Nokes (12:27)
right?
Adam Parks (12:32)
Rather than the 30 most popular formulas out there, conduct research and tell me the 30 that I should actually evaluate. Just because it’s the most popular doesn’t mean it’s the right one. So tell me the 30. And now I’ve narrowed it down to a list of 30. Now I’m going to take the list of 30 criteria, compare the 30 options against those 30 criteria, and now give me a response. And you’re gonna get a response, and now we all know about hallucination. So we all start going, okay, well, is this the real response?
John Nokes (12:43)
Right.
Right.
Right.
Adam Parks (13:02)
Or is this
a hallucination? Okay, now go back again and for each one of the criteria and for each one of the options that you evaluated, show me the source of the information that you used to make that assessment.
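The criteria-matrix step Adam walks through (30 criteria scored against 30 candidate formulas) is, at bottom, a small scoring table. A toy version with made-up option and criterion names:

```python
# Score each option against every criterion, then rank by how many it passes.
# Option and criterion names here are invented for illustration.

def rank_options(scores):
    """scores: {option: {criterion: True/False}} -> [(option, passes), ...]
    sorted best-first."""
    totals = {opt: sum(checks.values()) for opt, checks in scores.items()}
    return sorted(totals.items(), key=lambda kv: kv[1], reverse=True)

evaluation = {
    "formula_a": {"organic": True,  "lactose_based": True,  "no_pharma_owner": True},
    "formula_b": {"organic": True,  "lactose_based": False, "no_pharma_owner": True},
    "formula_c": {"organic": False, "lactose_based": True,  "no_pharma_owner": False},
}
```

The point is not the ranking itself but that each True still needs a source behind it, which is exactly what the "show me the source" pass checks.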
And now I’m spending another 30 minutes going through, clicking these links and seeing, where did it come from? Is this a reputable source? Is it coming from the CDC? Is it coming from their, you know, sales and marketing website? Is it coming from a third party? And then I can start honing in on what I would consider to be an acceptable source of data. Once I’ve got all that in place, now I can ask it to actually give me the best formula, right? And it’s funny, because when I met with the pediatrician,
Adam Parks (13:45)
And we were talking about formulas, and I told her how I had selected one, and she’s like, I’ve never heard of this one before. And so she did a whole bunch of research on it from a pediatrician’s perspective. Like, the actual doctor was so enthralled with the idea of how I had come to this conclusion that she called me back the other day and said,
I have no additional feedback. I have nothing else to say. Because I had given her the entire assessment. And so for me, I think what I learned through this process is I had to learn how to ask it the right questions. If I only ask it to give me responses, if I’m not willing to invest in the discussion,
John Nokes (14:08)
Yeah.
Adam Parks (14:22)
it’s the same reason that if you and I are just walking past each other and we talk about AI for 10 seconds, we might get something from it. But when we have these, you know, 45-minute discussions, I get so much more out of it, and you get so much more out of it, because we’re investing in each other’s knowledge and in that discussion to be able to draw a better conclusion. And I don’t know why we would treat AI as something other than another person that we’re trying to communicate with. Now, it’s a person who lies, so you gotta watch out. It’s a person who doesn’t always understand what you’re asking, but isn’t that anybody? So if you start thinking about it with that mind frame, I feel like there’s a lot more success that we can draw from these challenges.
John Nokes (15:04)
100% agree. And as you were going through the process, you know, I watched your conversation with Heath Morgan the other day, and his comments about his son being graded on the quality of the prompts and not necessarily the output. It’s making me think that what it does is change how our employees, and ourselves, have to address a problem.
John Nokes (15:29)
Right? It’s not saying, okay, let me throw it into Excel and sort of try to play with the numbers and figure it out. Instead of spending the time diving into the data, you’re spending the time asking the questions and having the process to ask the questions. And so it’s a different skill set that we need, and that we need our employees to have if we’re using AI. Because it’s all around how you ask the question, how you logically go through the steps to try to eliminate hallucinations, to make sure it’s real data. It’s fascinating, because it’s not a five-second solution like people think, right? You said 30 minutes here, 30 minutes here, 30 minutes here. It sounds like you were probably three, four, five hours in at minimum of going through questions. So it’s not, hey, AI, what’s the answer? It’s a logical process.
I guess I hadn’t really been thinking about it that way. I was of the opinion, AI, it’s smart. You ask a question, it gives you the answer. All is good. But it’s not. You have to ask the questions the right way to get the answers. And that’s the skill set that I need to learn, my employees, my people need to learn. Heck, schools need to be teaching that. I think I told you this. I had conversations with my mom
about how she’s saying AI is ruining the world. She didn’t say it quite that way, but her concern is kids aren’t learning the same way. They’re not learning the material because they’re using AI to cheat. And I certainly understand that argument. If you don’t go through the process of asking the prompts and learning the prompts and going through the process of the prompting, it’s an easy way out.
John Nokes (17:14)
But if you go through the steps, you learn, because you critically think about the question and not just the answer.
Adam Parks (17:24)
The example that I like to use for that: I remember being in, I want to say it was like freshman or sophomore year of high school, and I had an accounting class where we’d be writing these physical checks and balancing the ledgers, right, like we were doing the actual manual process. And I remember one of the girls in the class saying to the teacher, I don’t need any of this, QuickBooks does it for me. And his response was, but do you know what QuickBooks is doing, and if it’s doing it right?
Adam Parks (17:53)
And that has stuck with me for, you know, 20-plus years now. Okay, yeah, I can use a tool like QuickBooks to do these things. But if I’m not learning the underlying pieces, then I’m reliant on this technology, and if it creates a problem, it’s my problem. And in episode two, where I talked with Tim Collins about AI ethics, we talked a lot about the human responsibility. Even as a lawyer, he’s got a ChatGPT that he uses to check his contracts and to
provide him with some feedback and things, but it does not absolve him of the fiduciary responsibility to manually review those documents as well. So if you use it as a tool to enhance: let’s say it took me two and a half hours to do that analysis using ChatGPT. Well, if I had done all of that research manually, it would have taken me 10 hours.
So I probably saved 75% of my time, roughly, by leveraging the prompting and things. And when I went back to double-check the sources, I got some bullshit. I had to go back through it. It had some false responses. Also, because one of my criteria was that the formula company could not be owned by a pharmaceutical company, it only looked at who owns this or who’s producing it, not who owns
John Nokes (18:50)
Yeah.
Right.
Adam Parks (19:11)
the company that’s producing it, right? So it’s not going up two, three layers to try and understand who actually is in control of it. But I’m not comfortable providing a formula that’s being made by a pharmaceutical company. I feel like that’s a poor choice as a parent on my part. And maybe that’s just me being paranoid, but hey, maybe that’s why I was so slow to the game to start with AI to begin with. But you know, now that I’m
John Nokes (19:11)
Producer. Yeah.
Right.
Right.
Adam Parks (19:36)
using it on a daily basis, now that I’m actually engaging with it, I’ve changed my mind frame on how to look at it. Sometimes I look at it as a shortcut tool. And what I mean by that, this was also prompted by your conversation about Copilot and experimenting with Copilot in the Microsoft environment. Look, I can sit down and I can take two spreadsheets and I can compare them.
John Nokes (19:46)
Right.
Adam Parks (19:58)
Or
I can plug them into ChatGPT, and it can give me a really quick response. I can do it a lot faster using something like ChatGPT, getting those comparisons back and getting, let’s say, an exceptions report produced very, very quickly. Now, I’m not plugging in consumer data, right? My world is a little bit different these days. So it’s
John Nokes (20:15)
Right.
Adam Parks (20:17)
in my own model. But you also talked earlier about the cost of the models, and using a less expensive model versus a more expensive model. And I wanted to address something there too, because this was an interesting learning point for me recently. I pay the $200 a month for ChatGPT Pro for myself.
Because I want access to 4.5. I want priority access to my responses. I want to be prioritized, and I’m willing to pay that fee. I get much better responses when I’m leveraging it in that way than I do from the $20-a-month version. Now, most of my team is using the $20-a-month version, because that’s what makes sense for their use cases. But if I want it to do research papers and deep-level research for me and things like that,
Sometimes you gotta pay the piper. But if you consider the amount of time that I save for $200 on a monthly basis, I don’t think you can compare that. I mean, my time is worth more than that on an hourly basis. So if I save one hour a month, it’s already paid for itself. But I’m saving a lot more than that.
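The two-spreadsheet comparison Adam mentions a moment earlier is the kind of job that is also easy to do deterministically. A small sketch of an exceptions report over two record sets keyed by an ID (the column names are invented):

```python
def exceptions_report(side_a, side_b, key):
    """Compare two lists of dict records by a key column and return only
    the exceptions: rows missing from one side or with differing values."""
    a = {row[key]: row for row in side_a}
    b = {row[key]: row for row in side_b}
    exceptions = []
    for k in sorted(set(a) | set(b)):
        if k not in a:
            exceptions.append((k, "missing from A"))
        elif k not in b:
            exceptions.append((k, "missing from B"))
        elif a[k] != b[k]:
            exceptions.append((k, "values differ"))
    return exceptions
```

Matching rows drop out, so what remains is exactly the exceptions list, and no consumer data ever leaves your own environment.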
John Nokes (21:04)
Right.
Well, you know, I think you’re right. AI can be a time saver, right? If you know how to ask the questions, you can do, like you said, a lot more research in a smaller amount of time. And so even though, when you say, shoot, five hours, that’s a big chunk of someone’s day doing something, if it would have taken you 10 hours, you’re actually saving time. And so AI is…
It’s not the answer, it’s a tool to be efficient. It’s an efficiency tool, is what it is. And I think that’s the right way of addressing it right now. And the other thing you were saying, talking about Tim Collins: I’m sure you know this, and most people probably know, the story of the lawyer that submitted the brief that was hallucinated by AI, and he got disbarred.
And you know, just because you use AI and it gives you an answer doesn’t absolve you from making sure it’s right. And so part of the
process has to be, okay, let’s validate this. Is it telling me the right stuff? What are the sources? Can I go check them? Were my prompts right? Do I need to go three layers deep on who owns the company, not one layer deep? And that’s all about the learning. The more I talk with you and with others and read things, the more I realize I don’t know anything. It’s a huge… Yeah.
Adam Parks (22:43)
John, you’re not alone at all, right? First of all, you’re not alone, because I don’t really know anything either. All I learned over 200 hours of prompt engineering was how to order-of-operations my prompts. And I don’t even know that that’s truly optimized. That’s just a path that I found that worked for me in my use cases. I’m not gonna stand here and say that that’s how everybody should use ChatGPT, but I think that there’s…
John Nokes (23:03)
Right.
Adam Parks (23:09)
I think there’s some standardization that you have to find in yourself in order to create a repeatable response and result.
John Nokes (23:17)
Right. And one of our vendors came out with a new AI module for their tool, and there’s a ton of data there. And they go, look, you can use it for two weeks. And Jack Mahoney and I, we used it for two weeks, trying to figure out if it’s worth the extra money for it and this and that. And the answers it gave were not very good.
And now, having this conversation, I’m like, I’m not sure I gave it a fair shake. I’m not sure I asked the questions correctly to really understand what to expect. And so thinking about it, I’m like, I’d like to go back and do it again. Can I ask my questions differently? Will it let me chain the questions to refine the results and get better answers? When I was testing, it was one question, and if it’s not right, it’s no good. I realized that was faulty testing, not a faulty product.
Adam Parks (24:13)
I think that’s really insightful. Look, first of all, it’s very hard to see that kind of a weakness in our own processes or in how we’ve approached something. If nothing else, I would say you’re well ahead of the curve in even being able to identify that. To be honest, I never sat down and expected it to be good.
I kind of sat down with, not necessarily, how am I going to make this work, but how can I prove that this doesn’t work for my purposes and that it’s not going to be good? And my first tests were trash. And that’s why I say it took me 200 hours of actually spending time at the machine, at my computer, on my phone, testing, testing, testing. In part, and selfishly, one of my driving factors was that my wife was pregnant. And I said, I’m going to do the work now, not when the baby’s here. I wanna spend the time now, and I need to find better ways to do my job at the same or better quality level,
spending less time, so that I can spend that time with my family. And that was kind of my objective. But at the same time, I really didn’t think it was going to do what I needed it to do. And it took me a long time. Look, before I could actually produce content related to a podcast in a consistent format that met the quality expectations, you know, I’ve got probably 25 prompts that I run to actually produce something. Each one of those prompts took me
John Nokes (25:27)
Right.
Adam Parks (25:46)
12 hours, and sometimes I’d work on something for six to 12 hours and then be like, this is trash, let me throw this away and start over again. But every time that I did that, I went back with a different mind frame of how I was going to approach it. Look, the definition of insanity is doing the same thing over and over again and expecting a different result. So I tried not to do that and drive myself insane, but I constantly had to go back and say, what did I do wrong in this process? Am I including
John Nokes (25:51)
Right.
Adam Parks (26:15)
too much? Can it not see or understand the information that I’m providing it? Because if I tell it to, you know, write an article based on this podcast, and I don’t give it anything more specific than that, I’m gonna get trash in response, right? I have to be really isolated and specific, and I have to look at the individual criteria and my expectations and the word counts and the content diversity and the quotes. And almost every time it produces something, the quote isn’t real.
John Nokes (26:41)
Right.
Adam Parks (26:41)
And so I have to, every time I see a quote, I have to say, are you sure this quote was found in the transcript? Like, are you 100% sure? And most of the time it has to go back and rephrase that quote to find something from the actual transcript. So I still get a lot of false statements back from it,
but you start kind of learning it, and then you start learning what kind of content it creates, the formats that it uses. And now I can’t go on LinkedIn without spotting the 25 posts I just saw that were all AI-generated, because they were about nothing. And that’s the really hard part.
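Adam’s quote check ("are you sure this quote was found in the transcript?") is one of the few hallucination tests you can automate outright, since a real quote must appear verbatim. A minimal sketch:

```python
import re

def normalize(text: str) -> str:
    """Lowercase and collapse whitespace so line breaks and casing
    don't cause false misses."""
    return re.sub(r"\s+", " ", text).strip().lower()

def quote_in_transcript(quote: str, transcript: str) -> bool:
    """True only if the quote appears verbatim (up to whitespace/case)."""
    return normalize(quote) in normalize(transcript)
```

Every AI-generated quote can run through a check like this before publishing; anything that fails goes back to the model with an instruction to rephrase using words actually in the transcript.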
John Nokes (27:15)
So, being an inherently lazy person, my first question is, are there tools that you use, or that you know of, to help with the prompt process? Or even, are there AIs where you can give it the criteria and the prompt order, and it will build the prompts for you? Or is that not quite here?
Adam Parks (27:40)
So I think that there are some tools like that. What I found for me, it’s not even a Google Sheet, it’s a Google Doc. And so what I’ll do is keep version one of the prompt.
And then I work through that prompt, and when I make it version two, I’m constantly copying it out of the AI and into this doc so that I can kind of track and watch. And then it also slows me down for a minute, because I can see it in a formatted way and be like, well, is this really the criteria that I’m looking for? And so I’m constantly going back, not just through the AI, but actually looking at that prompt. Because one of the things that I’ve gotten very frustrated with is, if I was trying to write an article, for example, and I tell it to make like one change, and then it changes the whole fricking article.
John Nokes (27:55)
Okay.
Okay.
Adam Parks (28:20)
Now I’m all like, why did you do that? That’s not what I was looking for. I’m only looking for bite-size pieces. And so that’s kind of frustrating for me, when it goes off the rails. I’ve also found that sometimes, and this is gonna sound a little crazy, and OpenAI probably won’t appreciate this, but I do find that sometimes when it’s not working for me, I just need to step away for a few hours. And I can ask it the exact same thing the next day and get the response that I expected.
John Nokes (28:22)
Right.
Right.
Adam Parks (28:45)
So sometimes I know that there are some changes to the model, or deprioritizations, or the model’s not reasoning today, or whatever. So that has been a little bit of a challenge for me. And that was one of the things that prompted me to pay the $200 over the $20, because I was like, look, I want to be prioritized. I want my responses to be as fast as possible, because time is money, and I want to make sure that my responses are being prioritized.
John Nokes (28:50)
Yeah.
Yeah.
Well, the non-repeatability, depending on the use case, can be a problem, right? And I don’t know how that gets resolved. Because if I want to take a call transcript or agent’s notes and do a summary on it…
Adam Parks (29:15)
100%.
John Nokes (29:24)
If I do it today and I do it tomorrow, and it’s a different summary, and it’s significantly different, that’s a problem, right? Because if a regulator, the CFPB, comes in and says, show me something, and you can’t do the same thing every time, it’s not good. Yeah.
Adam Parks (29:36)
It’s the specificity of the prompt, right? So in that case,
it’s the specificity of the prompt itself. So it becomes, is it specific enough? So let’s say that I was trying to summarize a collector note, right? If I’m not giving it the language that it should use, it might make up new language. So I want to make sure that it’s got a defined definition of, you know,
the call has ended, or that this was a right-party contact, or whatever language is being used within those notes. You want to make sure that you’re defining it. It’s the same reason that if I were to go through and, say, start creating a social post about this episode right now, and I just say John Nokes, and let’s say there are 50 John Nokes on LinkedIn, it’s just going to go LinkedIn slash John Nokes, right? Unless I tell it at the beginning of that prompt that this is the John Nokes that I’m looking for, at this particular URL, I’m not
John Nokes (30:14)
Right.
right.
Adam Parks (30:28)
necessarily gonna get that. So I think it is about the specificity of the specific prompt that you’re using. Now, a lot of times, if I see something that comes back off, generally it’s gonna be a formatting issue, right? Like it’ll provide me the same response, but in two different formats. And then my immediate response will be, please review all of the other
John Nokes (30:28)
Right? Right.
Adam Parks (30:50)
chats in this project and make sure that all of your responses in this thread match the formatting that I use in other areas, and then it’ll go back and it’ll do that. So sometimes it does require that extra step, but that’s the human-in-the-loop part of the process that I think is just not something that you can ignore right now from an artificial intelligence standpoint.
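The controlled-vocabulary approach Adam describes, defining terms like "right party contact" up front so the model can't invent new language, can be sketched as a prompt template; all labels, field names, and the sample note below are hypothetical:

```python
# Sketch of a summarization prompt with a defined vocabulary, so the model
# reuses the agency's own terms instead of inventing new language.
# All status labels and the note text here are hypothetical examples.

ALLOWED_TERMS = {
    "RPC": "right party contact - the account holder was reached",
    "CALL_ENDED": "the call has ended",
    "PTP": "promise to pay - consumer committed to a payment date",
}

def build_summary_prompt(collector_note: str) -> str:
    glossary = "\n".join(f"- {k}: {v}" for k, v in ALLOWED_TERMS.items())
    return (
        "Summarize the collector note below in one paragraph.\n"
        "Use ONLY the status labels defined in this glossary; "
        "do not invent new terms:\n"
        f"{glossary}\n\n"
        f"Collector note:\n{collector_note}\n"
    )

prompt = build_summary_prompt("Spoke w/ acct holder, verified DOB, call ended 14:32.")
print(prompt)
```

The point of the glossary block is exactly the "defined definition" idea from the conversation: the allowed language rides along inside every prompt, so the output vocabulary stays stable from one day to the next.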
John Nokes (31:11)
Yeah, well, I say that to my co-workers, you know. They want to use AI to automate all these processes, and I’m like, well, I’m not comfortable with AI
reading a thousand documents, right, and getting the name off of each document correctly, and making sure it’s the right document to send to a consumer, without someone visually checking and saying, okay, I wanted to send it to Mary, it’s Mary’s document, it really is the transaction history, yes, we can send it. You can use AI to get everything together and have a nice little
list you go through, and do all the busy work, but I think you still need eyes on it. I’m not comfortable enough not having eyes on it.
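John's "eyes on it" workflow, where AI assembles the list and a person confirms each item before anything goes out, could be sketched as a simple review queue; the data model and field names here are hypothetical:

```python
# Sketch of a human-in-the-loop review queue: an AI step proposes
# document-to-consumer matches, and nothing is sent until a human
# approves each one. Names and fields are hypothetical.

from dataclasses import dataclass, field

@dataclass
class ProposedMatch:
    consumer: str
    document: str
    approved: bool = False

@dataclass
class ReviewQueue:
    items: list = field(default_factory=list)

    def propose(self, consumer: str, document: str) -> None:
        # In practice this would come from an AI extraction step.
        self.items.append(ProposedMatch(consumer, document))

    def approve(self, index: int) -> None:
        # The human check: someone confirms it really is Mary's document.
        self.items[index].approved = True

    def ready_to_send(self):
        return [m for m in self.items if m.approved]

queue = ReviewQueue()
queue.propose("Mary", "transaction_history_mary.pdf")
queue.propose("Bob", "transaction_history_bob.pdf")
queue.approve(0)  # only Mary's match has been human-verified
print(len(queue.ready_to_send()))
```

The design choice is that `approved` defaults to false: the AI can fill the queue, but only an explicit human action moves anything to the send list.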
Adam Parks (31:55)
Well, that’s a confidence level. And until the system has proven that it’s 100% effective 100% of the time, you still need that human, right? Like, it’s just going to be part of the process. But, you know, that’s a 2025 issue. Is it going to be a 2030 issue?
John Nokes (32:04)
Right. Right.
Right, right. Well, you know, one of the challenges we’ve talked about, and you probably have talked about this in other podcasts: if you use AI, let’s say either voice AI or chat AI, to talk to consumers, and the AI hallucinates and all of a sudden says something it shouldn’t say.
Adam Parks (32:16)
Maybe not.
John Nokes (32:37)
even though it’s to one consumer, is that now a class action, because you’re using this automated tool to talk to thousands of consumers? Some lawyer can say that’s now a class action. So instead of a thousand bucks, it’s going to be a hundred million bucks.
Adam Parks (32:52)
I don’t disagree with that statement. And I think that’s part of the journey that we’re all on right now as an industry, asking
how can we start to use this? I did a webinar with Cris Bjelajac from Latitude by Genesis, where we talked about the six use cases of artificial intelligence in the debt collection space. And we kind of walked through each one of those and talked about how those things will come together. And I think some of those are closer to real rollouts than others. I’ve seen a lot of, I’m gonna call it bogus AI technology, that claims to be generative AI. It’s not; it’s if-then statements and things of that nature.
For now, I feel like the human in the loop process is just gonna be the way that it is in the short term. It’s going to be something that we have to have in order to feel comfortable with it. I would not, under any circumstances, take this, plug it in, run it through the series of prompts without looking at it, and publish it.
John Nokes (33:38)
Yeah.
Right.
Adam Parks (33:49)
Because I’ve never had one go perfectly every time, exactly the way that I had it in my mind: the right topics, the right everything, right? The right breakdown. So I think it’s a great tool to help us get to that point. But I can take a podcast that used to take me three days, and now I can do it in three hours. I would say that’s a pretty significant lift that increases my capabilities. But we’ve looked at AI, I think, a little bit differently than other organizations. I think a lot of people look at it and say, how much bigger can I make my company if I use AI?
John Nokes (33:52)
Right.
Yeah.
Right.
Adam Parks (34:20)
I’m taking a different approach. To me, it’s how much better of a service can I provide to my clients?
for the same dollars.
John Nokes (34:25)
Yeah, well, it depends on the type of business, right? And there’s a quality and there’s a quantity. You’re a quality company. There’s quantity companies. And cost to collect’s a big value in collections. And so can AI help your cost to collect?
Adam Parks (34:29)
100%.
Big.
John Nokes (34:43)
Right? You know, I think I need to start thinking about AI as a team of interns, right? You use a team of interns to get all your data together, but you have to look at it and make sure it’s right, because they’re interns. You know, they only know so much.
Adam Parks (34:50)
That’s a great analogy.
John, I think that might be the quote that I walk out of here with today. Seriously, look at AI like a group of interns in your organization. It’s an extra set of hands; they can be helpful, they can save you time, but they can’t go file legal briefs for you. There are limitations to what you would enable an intern to do within your organization. I think that means training the people in your organization on
John Nokes (35:00)
You
Adam Parks (35:23)
the right and wrong ways to use it, the things that they can and can’t do with it, what they can and can’t access, which models they can use, and which boxes those models should be in. And then I did an episode with Sarah Wagerman talking specifically about how are you going to audit your collection agencies that are using it? How are the banks going to audit you? How are the regulators going to audit you? And I think the number one thing that I pulled from that podcast was Sarah talking about privacy impact statements
John Nokes (35:33)
Right.
Yes, exactly. Yes.
Adam Parks (35:52)
or privacy impact assessments, and trying to understand, you know, okay, if I’m using an AI tool, does that mean that the data is going to a third party? If it’s going to a third party, who’s in your application stack? Is it going to a fourth, fifth, sixth party? How deep does the rabbit hole go, and how deep am I willing to let it go where I’m still comfortable?
John Nokes (36:10)
Yeah, I’m talking to a company to look at our documents, and, you know, they’re out of Singapore. I’m like, okay, is that okay? Where’s the model? You know, they have a US company, a US base. But even if I were to go to a US company, unless I ask and they tell me, they could be using teams in Singapore to do the work. So, you know, it’s sort of like,
Adam Parks (36:20)
Where’s the model?
could also be in Singapore. Correct.
John Nokes (36:36)
You know, you have to ask the right questions, get comfortable that you know where the data is and who’s touching it. And, you know, are they intermixing your data with other collection companies’ data, if it’s collection stuff, right? To try to build their models. How is your data being used to build the model?
Adam Parks (36:53)
Well, we all hear the horror stories of Samsung, right? They plugged in a whole bunch of proprietary information that ain’t proprietary no mo. That’s now part of the model, and we can all go in there and query it. So who has control of which data? And I’ve said this, I think, on almost every podcast now: if you’re not paying for the product, you are the product, just like Gmail. Google reads all of your emails if you’ve got a free Gmail account, and they use that to serve you ads.
John Nokes (37:00)
right.
That’s right. Yeah.
Adam Parks (37:21)
because you’re the product. If you’re not paying for that service, somebody’s paying for that service, and in that case, it’s the advertiser, so be very careful about where you want to feed your data.
John Nokes (37:27)
right.
That’s
right. And that’s another reason to pay for your ChatGPT or whatever, right? If you pay for it and your data can be segmented, then I have more comfort with that than if it’s used for everything.
Adam Parks (37:34)
There are two statements at the bottom of my paid ChatGPT, right? One is that ChatGPT may be wrong, like, verify everything. And it specifically says that the data collected through this account is not used to train the general model.
Adam Parks (37:59)
That’s not the exact language. I don’t have it in front of me at the moment, but that’s generally the language on it. So every time I’m writing a prompt, I’m like, are you hallucinating? And hey, at least my data is just my data. And OpenAI, I feel, has probably earned a little bit of trust in that they’re actually doing things the right way. But then you had something like DeepSeek come out.
John Nokes (38:00)
Right? Yeah.
right.
Adam Parks (38:17)
Right. And that was going to be the latest and greatest model for, what was that, like a day and a half? Before we realized that it had no safeguards in place, all of the data was being commingled, and nothing is actually private. And, you know, it’s really just, everything is being fed to the Chinese government.
John Nokes (38:30)
Yeah.
Yeah.
Adam Parks (38:34)
And so I’m really careful about the next tool that I’m going to use, right? Like Grok from Twitter. I’m going to start to play with that a little bit, but I’m not at the point now where I’m ready to load major data sets into it. I am interested in trying some of these other models, and I’m not going to say that ChatGPT is the be-all, end-all of any of it, because clearly we’re just going to continue to see more technology come over the top of it. But data storage is going to start to become an issue. Where is our data being stored? How is it being stored?
So many privacy issues come to the surface, especially for debt collectors, right? With the lack of federal oversight from the CFPB these days, the states are going to start coming in, and then how qualified are they to really be evaluating this to begin with?
John Nokes (39:02)
Yeah.
Right.
Right.
Adam Parks (39:13)
What kind
of tools do they have? What kind of background do they have? So I feel like there’s going to be some challenges there. I know RMAI is trying to address the use of artificial intelligence within the debt collection industry, including at least some level of standards related to the technology and the topic itself. But even at that, unless you’re talking about specific use cases, it’s going to be really hard to put any kind of arms around how AI is actually being used in the space.
John Nokes (39:16)
Yeah.
Absolutely. Whenever I talk to you, I always come away with more questions than answers. And it’s great. It’s the way it needs to be, right? This is a new product, a new thought process, and we’ve got to keep evolving.
Adam Parks (39:45)
I think that’s the point of a podcast, right?
John Nokes (39:57)
This is sort of the next big step to figure out how to do this. And there’s just so many questions. It can do so many things. There’s just so many questions.
Adam Parks (40:06)
There’s a lot of questions, John, but I have to say thank you for putting me on the path.
I remember the moment when my friends got me into crypto, and you were that moment that really got me driving down the path of trying AI. And it was your honesty in our last discussion, of like, I don’t really know what I’m doing, but here are the baby steps that I’m taking, here are some of the tests that I’m running, here are the results that I’ve seen. For me, that was an empowering conversation, because it made me feel like, all right, hey, I don’t have to know it all in order to do this. I don’t have to be an expert in it for me to go dip my toe in and try. And you kind of put me on that path that has
really had a dramatic impact, not only on my business, but on my life. So I appreciate that, and I hope that you will continue to participate with me as we continue to both go down our unique AI journeys, and we’ll just continue to come back together and talk about what we’ve learned and how these things are going. I know we flipped the script here a little bit today, right, with you asking more of the questions and me sharing more of my experience, but I appreciate that, because these are some things that I think our
industry can really benefit from.
John Nokes (41:11)
Absolutely. I enjoy speaking with you, I enjoy doing this, and I’ve learned a lot. I wanted to take notes, but I stopped myself from taking notes because I’m going to go back and listen to it and then take the notes. I’m like, okay, this is what we need to do.
Adam Parks (41:25)
I’ll send you the transcript with the action items. I got your back, John, I appreciate you. And thank you so much for coming on and sharing your insights, being a little bit vulnerable with me. Like I really do appreciate that. I think that’s the kind of content this industry needs.
John Nokes (41:29)
Awesome, awesome.
Thank you. Thanks for having me.
Adam Parks (41:41)
For those of you that are watching, if you have additional questions you’d like to ask John or myself, you can leave those in the comments on LinkedIn and YouTube, and we’ll be responding to those. Or if you have additional topics you’d like to see us discuss, prompts you want to hear us talk about, or any of that, you can leave those in the comments below as well. And hopefully I’ll get John back here at least one more time, helping me create great content for a great industry. But until next time, John, thank you so much for coming on and participating with me today. I really do appreciate it.
John Nokes (42:04)
Thank you very much.
Adam Parks (42:06)
And thank you, everybody, for watching. I appreciate your time and attention. We’ll see y’all again soon. Bye, everybody.
Introduction
Did you know that AI tools can reduce debt collection processing time by up to 70%? That kind of efficiency is no longer a future dream—it’s happening now. In this powerful episode of the AI Hub Podcast, Adam Parks interviews John Nokes from National Credit Adjusters to explore how to use AI in debt collection without sacrificing compliance or accuracy.
As the debt recovery industry faces growing pressure to innovate while remaining compliant, this episode highlights how AI is shaping new, smarter workflows. From managing hallucinations to training staff on prompt engineering, John shares how his team is navigating the digital transformation.
Key Insights from the Episode
The Importance of Prompt Engineering
- Vague prompts create vague results.
- Structured, single-objective prompts yield reliable outputs.
- Iterative refinement of prompts boosts AI value over time.
“It’s all about how you ask the question. That’s the new skill set our teams need.” — John Nokes
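The contrast between a vague prompt and a structured, single-objective one can be shown in a few lines; the wording below is purely illustrative:

```python
# Illustrative contrast between a vague prompt and a structured,
# single-objective prompt. All wording here is a hypothetical example.

vague_prompt = "Summarize this account."

structured_prompt = (
    "Objective: summarize ONE collector call note.\n"
    "Format: exactly three bullet points.\n"
    "Constraints: use only facts present in the note; "
    "if a detail is missing, write 'not stated'.\n"
    "Note: {note_text}"
)

# The template is filled per note, so every run uses identical instructions.
print(structured_prompt.format(note_text="Left voicemail at 10:02."))
```

Pinning the objective, format, and constraints in the template is what makes the output comparable from run to run; only the note text varies.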
AI is Not a Magic Button
- There’s no plug-and-play AI solution.
- Initial tests often fail due to poor input design.
- AI should be viewed as a supportive intern—capable, but needing supervision.
“AI isn’t magic—it’s math. It only works if you structure it right.” — Adam Parks
Human-in-the-Loop Is Critical
- AI outputs must be reviewed by compliance-trained personnel.
- Hallucinations still occur and can lead to major legal issues.
“You still need eyes on it. I’m not comfortable sending documents without a human check.” — John Nokes
Cost vs. Accuracy: Investing in Better Models
- Higher-tier models like GPT-4.5 provide significantly better results.
- Time saved often outweighs subscription costs.
“Saving an hour a month pays for the upgrade. But I’m saving 10.” — Adam Parks
Actionable Tips for Debt Collection Firms
- Start Small: Implement AI in low-risk areas like data cleanup or evergreen content creation.
- Refine Prompts: Build a prompt playbook based on use-case testing.
- Train Internally: Upskill staff to understand prompt logic and review AI outputs.
- Vet Vendors: Ask privacy and compliance questions before deploying external tools.
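A prompt playbook can start as something as simple as a versioned mapping from use case to its tested prompt and required QA step; the structure and entries below are hypothetical:

```python
# Minimal sketch of a prompt playbook: each entry records the tested
# prompt, its version, and the human QA step it requires.
# All entries are hypothetical examples.

PLAYBOOK = {
    "call_note_summary": {
        "version": "1.2",
        "prompt": "Summarize the note using only the approved glossary terms.",
        "qa_step": "Compliance reviewer spot-checks 10% of summaries.",
    },
    "document_match": {
        "version": "0.9",
        "prompt": "List the consumer name and document type for each file.",
        "qa_step": "A human approves every match before anything is sent.",
    },
}

# Versioning each prompt gives auditors a record of exactly what
# instructions were in force when a given output was produced.
for use_case, entry in PLAYBOOK.items():
    print(f"{use_case} v{entry['version']}: QA = {entry['qa_step']}")
```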
Timestamps
- 00:00 – Why AI is changing everything for debt collectors
- 03:15 – What John learned from early AI testing at NCA
- 07:02 – Building prompt engineering skills and repeatable workflows
- 14:30 – Human-in-the-loop: Ensuring compliance and trust
- 21:00 – AI hallucinations and the importance of validation
- 32:00 – Privacy risks and vendor evaluation tips
Frequently Asked Questions About Using AI in Debt Collection
Q1: How can I start using AI in my collection agency?
Begin with support functions—summarizing call notes, automating templates, or generating campaign content.
Q2: What are repeatable AI workflows?
They involve defined inputs, optimized prompts, consistent output formats, and a human QA step.
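Those four pieces, defined inputs, an optimized prompt, a consistent output format, and a human QA step, can be sketched as one pipeline; every function here is a hypothetical stand-in, with the model call stubbed out:

```python
# Sketch of a repeatable AI workflow: fixed input shape, fixed prompt,
# an output-format check, then a human QA gate. The model call is
# stubbed out; in real use it would call an LLM API.

def build_prompt(note):
    return f"Summarize in exactly one sentence: {note}"

def fake_model(prompt):
    # Stand-in for a real model call.
    return "Consumer was reached and a payment plan was discussed."

def format_ok(output):
    # Consistent output format: a single sentence ending in a period.
    return output.endswith(".") and output.count(".") == 1

def run_workflow(note, human_approves):
    output = fake_model(build_prompt(note))
    if not format_ok(output):
        return None  # fails the format check, never reaches QA
    if not human_approves(output):
        return None  # the human QA step rejected it
    return output

result = run_workflow("Call at 10:02, RPC, discussed plan.",
                      human_approves=lambda s: True)
print(result)
```

Because the prompt and format check are fixed, any drift in model behavior shows up as a failed check rather than silently changing the output that reaches QA.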
Q3: Can AI really reduce compliance risk?
Yes—when paired with clear parameters and human review, AI can improve documentation and reduce manual errors.
Q4: How do I avoid hallucinations in AI tools?
Be specific in your prompts, break complex tasks into steps, and verify outputs with credible sources.
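The advice to break complex tasks into steps can be sketched as chaining small prompts, each with a human checkpoint, instead of one big request; the prompts and the stand-in model below are illustrative:

```python
# Sketch of decomposing one big request into small, verifiable steps,
# which gives a reviewer checkpoints to catch hallucinations.
# The step prompts and the model stand-in are hypothetical.

steps = [
    "Step 1: extract only the dates mentioned in the note.",
    "Step 2: extract only the dollar amounts mentioned in the note.",
    "Step 3: using the verified outputs of steps 1-2, write a summary.",
]

def run_steps(note, model):
    outputs = []
    for step in steps:
        outputs.append(model(f"{step}\nNote: {note}"))
        # A reviewer can verify each intermediate output here
        # before it feeds the next step.
    return outputs

# The lambda stands in for a real model call.
outs = run_steps("Paid $50 on 3/1.", model=lambda p: f"OK: {p.splitlines()[0]}")
print(len(outs))
```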
About Company
National Credit Adjusters
National Credit Adjusters, LLC specializes in purchasing and servicing distressed and non-performing consumer accounts receivables. Our services are rooted in our company’s mission to bring integrity, professionalism, and the highest standards of compliance to debt servicing. With a strong focus on the customer experience, NCA has built trust and established long-term relationships with creditors that have enabled our continued mutual success.