Is AI transforming financial services for the better? In this episode of AI Hub, Tim Collins from InDebted joins us to break down the ethical dilemmas of AI, the role of human oversight, and how consumer trust is reshaped in the financial sector.
Adam Parks (00:07)
Hello everybody, Adam Parks here with the second episode of the AI Hub podcast. These are quickly becoming some of my favorite discussions. Artificial intelligence is just such a hot-button topic across the entire world, but definitely within the debt collection industry as well. And today I'm joined by, I would say, an AI debt collection legend, definitely a man who has been at the forefront of everything
artificial intelligence in our industry, Mr. Tim Collins. How you doing today, Tim?
Tim Collins (00:37)
Awesome. Thank you for having me. It’s great to be here.
Adam Parks (00:40)
I really do appreciate you coming on and sharing your insights. We had a really interesting conversation in the first episode that went way off the rails of where I expected, and I'm hoping we can accomplish the same thing today. But before we jump into a conversation about AI ethics, let's hear from our sponsor, Latitude by Genesys, who's sponsoring our podcast.
All right, Tim, very excited to have you here and to get this conversation rolling. AI ethics is such a broad topic for us to cover in like 40 minutes, but we're going to do our best. As we were prepping for the call, you had made a comment about an article stating that there would be more AI-started businesses than maybe human-started
businesses at some point in the future, which I think raises a number of ethical dilemmas that might be a good basis for this conversation. So could you tell everybody a little bit of background on what you saw in that article and what it made you think about?
Tim Collins (02:41)
Yes, and thanks again, Adam, for having me here today. It really talked about the five stages of AI. And you could really see it. First, AI as a feature: you have software already, and people are starting to add AI features into that software. Then AI as a product: this could be a standalone chatbot or something that you're using. But a lot of times people, especially in the debt collection agencies, are now starting to add that AI into their software, like Latitude.
You could API that in, by way of example. Then you have AI as a system, which is an AI-driven workflow. So at InDebted, that's what we have. Accounts come in, and they go through that whole scrubbing process. And then there's this workflow that happens: an email goes out — does the customer open it, not open it? What content do you send them? When do you send it? What channel do you use? It becomes very, very complex. It's not like the old days, where we send a letter and then we call them.
Right? Those were the only variables we had to play with. And today it's all those different variables. It's hugely complex. AI as an agent is kind of the phase that we're in right now. You're starting to hear a lot about how you can get an agent that will do something for you. That could be a collections agent. That could be somebody that's helping with the underwriting on loans. It could be medical, especially in front-end screening: as you come in, please fill out this form, what are your symptoms?
So there are all these AI agents that are really coming in, and that's where we're seeing a lot of concern already. But if you leap ahead to the next one, it's AI as an autonomous organization in itself. What I mean by that is its ability to manage the whole business process. If you picture it this way — this is the article I read — it was a three-person unicorn.
So a unicorn means a hundred million in revenue, and you have three people, that's it, that are running it. And the thing that was really a far stretch in there is that these companies are going to be AI-first companies. Like you had your InDebted, which is a digital-first kind of company, and now we're putting on AI. Now you're going to start seeing these companies that are AI from the very beginning. They've built everything on AI, whether that's accounting software that's AI-backed, or whether that's
voice AI, all of that stuff. And they're moving from this, what I call co-pilot, to autopilot, right? There's no human here. There's no human who's doing the coding and looking at the coding or putting in the prompts — all of that stuff that's happening already. So if you took it to the extreme, the AI is going to generate the idea. It's going to say, hey, I've got this product idea. It's then going to code it. It's going to create that MVP, and then it's going to test it.
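The AI-driven collections workflow Tim outlines — accounts coming in, getting scrubbed, then a loop that picks content, channel, and timing from engagement signals — can be sketched in a few lines of Python. This is a hypothetical illustration, not InDebted's actual system; the account fields and action names are invented.

```python
# A toy model of the "AI as a system" workflow: scrub incoming
# accounts, then choose the next outreach step from simple signals.
from dataclasses import dataclass

@dataclass
class Account:
    customer_id: str
    emails_sent: int = 0
    emails_opened: int = 0
    scrubbed: bool = False

def scrub(account: Account) -> Account:
    # Placeholder for the compliance/validation "scrubbing" step.
    account.scrubbed = True
    return account

def next_action(account: Account) -> str:
    """Pick the next outreach step from basic engagement signals."""
    if not account.scrubbed:
        return "scrub"
    if account.emails_sent == 0:
        return "send_intro_email"
    if account.emails_opened == 0:
        return "try_sms_channel"   # switch channels when email is ignored
    return "send_followup_email"

acct = scrub(Account("cust-001"))
print(next_action(acct))  # → send_intro_email
```

A production system would weigh many more variables (timing, content, channel history) with a model rather than hand-written rules, which is exactly the complexity Tim is pointing at.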
Adam Parks (05:07)
It’s a good analogy.
Tim Collins (05:28)
And then it's going to tweak it. It's going to A/B test at scale, because it'd be able to go back and forth. It's going to see how it's picked up and used. And then once it finds product fit, it will start to go into, okay, now we need to do all the back-office functions. I need to get my legal AI to draft my contract, my MSA, my SOW. I need my accounting folks. As the revenue starts to come in, I need sales, I need marketing — and all of that will be AI-driven. And the premise of this article is that in the not-so-distant
future, we will have as many AI companies, if not more, than we do human companies, because it's scalable. Now, when does that happen? What are the ethical dilemmas of that? You know, we can create products that we don't need. We do that now as humans, but we could do it on a massive scale. So if you think about it, that opens up a whole host of ethical
concerns and considerations to be thinking of as we go down that path.
Adam Parks (06:30)
Well, one
of the things that comes to mind right away is: how is the workforce affected by this type of change? You generally have two camps at this point. You have those that are embracing AI and thinking, maybe I can get a second job now, because I can do everything in half a day — they're really starting to embrace it. And then you have those that are really against it: I'm not going to touch it. But in the TransUnion 2024 Debt Collection Industry Report, we saw
a 40% shift in the number of companies that in 2023 said they'd never touch AI — in 2024, they're exploring it. I mean, a 40% shift is pretty significant. Now, granted, that's not the majority of the industry, and I think a lot of organizations are still a little afraid. Similar to text messaging and email, I imagine that the lack of clarity around the regulatory environment as it relates to artificial intelligence is part of that. The first challenge being the black-box
challenge — the CFPB coming out saying, hey, it's got to be explainable. You have to be able to explain everything that's happening here. And I don't disagree with that. You've got to be able to determine that you're not adversely treating one subset of consumers over another. I can understand that. But we've been afraid of sending text messages and emails for such a long time, and that's pretty basic functionality.
Do you think that there's just a fear level around artificial intelligence, or what do you think is holding back that use case?
Tim Collins (08:02)
Well, I think there's a couple of things there. One, the use of AI — as we talked about with those stages — has happened so quickly. That technological shift has happened in such a short period of time that it's difficult for even cutting-edge companies to stay on top of it, just because it's changing all the time. We went from large language models that are very generic and can answer a bunch of questions to now these specialized models.
And then you have DeepSeek come out and do it on a much different level. And there's a bunch of ethics stuff we can talk about there, because their latest story is they have no security. Yeah, they have no security whatsoever. It will create all the harm; it doesn't have any of the blocks. You can ask it anything — how to create a bomb — and it will give you the whole thing. So from an ethics perspective, these things have moved so quickly.
Adam Parks (08:37)
Where’s the data? Yeah.
Tim Collins (08:54)
But what you talk about, I think, is fundamental. And that's that transparency — it's understanding the black box. And somebody said to me at one point: in any of these things that we do, whether that's sending an email or a text message or calling from a computer, we should be able to explain it to grandma, and she should be able to understand it. And I do think the beauty of some of the AI stuff is that today it's a lot easier for us to take the outputs that have happened
and feed those back into the AI and say, hey, is there an issue here? So you can have almost this AI watching AI, if you will, to tell you, hey, do we think there are any issues? You can create that compliance firewall, or band, or whatever that is, and you can start running everything through it to make sure that you don't have those issues. 'Cause everybody's afraid, because AI can do it at scale, right? If you want to spin up a hundred thousand, you can. Yes. Yes.
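The "AI watching AI" compliance firewall Tim describes can be sketched as a second checker that every outbound message passes through before it is sent. A real firewall would use a trained model; the rule list below is a stand-in, and the prohibited terms are invented for illustration.

```python
# Toy compliance firewall: a second layer reviews every generated
# message before it goes out; anything flagged is routed to a human.
PROHIBITED = ["threaten", "arrest", "lawsuit guaranteed"]

def compliance_check(message: str) -> tuple[bool, list[str]]:
    """Return (approved, flags) for a candidate outbound message."""
    flags = [term for term in PROHIBITED if term in message.lower()]
    return (len(flags) == 0, flags)

def send_if_clean(message: str) -> str:
    approved, flags = compliance_check(message)
    if not approved:
        return f"BLOCKED: {flags}"   # route to human review instead of sending
    return "SENT"

print(send_if_clean("We can offer a payment plan."))    # → SENT
print(send_if_clean("Pay now or we will arrest you."))  # → BLOCKED: ['arrest']
```

The point of the design is Tim's: because the sending side scales to a hundred thousand messages, the checking side has to run on every single one, with humans spot-checking behind it.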
Adam Parks (09:41)
Well, it's similar to listening to call recordings.
You can't listen to all of the call recordings with humans — you're only able to listen to so many. The AI can listen to all of them and will identify problem situations. But it still requires you to spot check, even beyond those reported exceptions.
Tim Collins (09:45)
Yes.
Absolutely, absolutely. And even if you imagine you had voice AI with the leeway to kind of create its own conversation — it wasn't 100% scripted — you would have AI like Sedric.ai, one of those players that's out there (there are a few others), that would watch the AI to make sure it didn't go off the rails or wasn't hallucinating and all that stuff. But somebody's still got to be looking at that other software and spot checking. And so, back to your point:
humans aren't going anywhere. They're gonna still have to stay in the process, because unfortunately AI was created by humans, and so it's perfectly flawed in every way.
Adam Parks (10:50)
Well,
humans are going to have bias. And what drove the bias situation home for me — what really provided me personally a parallel that I could understand — was the Apple Card, the debacle between Apple Card and Goldman Sachs when they first rolled it out. Let's pretend for a minute: my wife and I live in the same home, all the same things. Let's assume we have the same financial standing and everything else. She'd get half the credit limit that I would get
Tim Collins (10:53)
Yes! That’s another, yeah.
Yes. Yes.
Adam Parks (11:20)
when applying for that card. Granted, that was caught really early and addressed. At least it's the first time I can remember somebody standing up and saying, we're using AI underwriting.
Tim Collins (11:30)
Right, right. And I think you make a couple of great points in there. One is: whoever builds the model, whatever data they use, is what that AI is going to do. And in the past, we never really thought about it. We took everything on the internet, of which 95% is crap. So we've trained these AI models on crap, right? And sometimes the output it's going to create — if it's created
using this data, like it was for the Goldman Sachs example that you gave — the outcome is predetermined. And what really needed to happen was, to your point, they caught it because they started to see this. They got some complaints, and they didn't catch it early enough. But that might've been where AI sitting on top of AI could have said, look, we are seeing disparate treatment here that shouldn't be happening, because women are 53% of the population,
yet they're not getting the same credit amount. And we'll see that time and time again. So you talk about transparency, to be able to explain it; and bias, to be able to understand what biases are built into it, because a human actually built it. You can fix that with data, but you can also fix it by looking at those outcomes and testing them to make sure they don't have it. And you can test at scale very, very quickly, which is what we couldn't do before.
Adam Parks (12:57)
So when we talk about the hallucinations, as an example — and I've gone through a couple hundred hours of prompt engineering over the past few months, really trying to dig in, both in preparing for this podcast but also just kind of being a nerd and having some fun — the hallucinations are real. I saw plenty of them. I see them on the regular as I'm using ChatGPT and other toolsets. Who do you think is ultimately responsible for the hallucinations?
I feel like this is an ethical dilemma. If somebody in your office turns something in with a hallucination in it, is it their responsibility, or is it now ChatGPT's responsibility? It doesn't sound right.
Tim Collins (13:24)
Yeah, it is, it’s who I am.
No, it doesn't. But I think, if we come back to it: the ChatGPTs are great for general use. You could throw things in there and ask them questions, you could do a recipe — but there were even hallucinations on recipes that weren't good, or maybe even slightly toxic to humans. So I do think that there's always that element of looking at it. But in today's day and age, you can become very, very specific: instead of a large language model, using a small
language model, right? Where you're using very specific data to drive a very specific outcome. Like if you wanted to create a recipe GPT, you could grab all the recipes that you know, put them in, and say, look, work off of these. You could build it that specifically, because hallucinations happen the broader you are. So the ChatGPTs that are out there, the Anthropics with Claude,
you know, even Grok, G-R-O-K, the Twitter — excuse me, X — version: they're very broad, so they give you kind of a broad perspective. But what we're seeing with that AI agent today is we're very, very specific. I could take all of your, you know, great collection calls and I could clone an Adam Parks, right? And so all of a sudden, instead of having just one Adam Parks, I basically have two — but not really,
because I don't have the empathy. I can make it sound empathetic, right? But I don't have that element in there, because outside of saying "I'm sorry," the bot cannot relate to anything that a human might've gone through. And that's the point at which you need to leave the bot and move to a human. And that, I think, is an element that's in there. Part of that transparency, I think, is
telling people that they are talking to a bot right out of the gate. And if they wanna talk to a human: just say "human," just say "agent," you know. But on top of that, if the consumer says something like, you know, "my dad just passed away," that's when that bot should say, we need a human — hey, listen, let me pass you over to Tim, I'm gonna move this conversation over. And we build that stuff in automatically, instead of just saying,
I want every debt collection call to never talk to a human. And so, to your point earlier about this balance between humans and bots: the people, now, are going to be getting the calls at a higher level of escalation, where the consumer needs the empathy, needs to be able to work through a more complex problem that the bot hasn't been trained on. So humans won't go away, but they'll be doing other things — maybe training the machine, maybe handling those types of phone calls, that kind of stuff. But
from an ethical perspective, I wanna know that I'm talking to a bot, because they are so good today. Like, if you do ChatGPT voice, she sounds pretty damn good. You do not know who you're talking to, and it can get a little bit creepy — you're building this relationship with a computer because the voice is so good. In the old days, we had pauses and it sounded like robots. Today, latency is covered:
the bot just says "um" and "ah," just like we do all the time. And the next thing you know, I think I'm talking to a human, when really I've been talking to a bot for the last 20 minutes.
Adam Parks (16:54)
I actually interviewed a bot on a recent episode of the Five Minute Pitch video series. So Kompato AI gave me the opportunity to actually talk to the bot. I'd been challenging organizations in the space: put me on the phone with your bot. Everybody kept saying, no, no, we're going to do a podcast together. That's great — put me on the phone with your bot. Let's do it, I'm in. Nobody wanted to put me on. They did. And it was a really cool experience for me. One, I got to enjoy the process. But two, it was very eye-opening for me. And I feel like over the last
Tim Collins (17:01)
Sweet.
Adam Parks (17:24)
three months, I have really embraced the power of AI for our organization. But talking about the hallucinations, for example: the larger the request, the more likely it was to hallucinate. If I'm trying to draft an article, it's one thing; if I'm trying to draft an entire campaign, it's another. And the more complex the requirements get, the more difficult it gets for it to be able to do it. I found that I had to actually create a series in order to get it to roll through the process. And we're literally
Tim Collins (17:29)
Yes.
Adam Parks (17:56)
going to produce this podcast using artificial intelligence as soon as we’re done recording. All of the short videos, all of those things are now part of that process. But if I ask it to do…
Tim Collins (18:02)
Perfect.
Adam Parks (18:08)
everything at once, it'll tell me that Tim said some crazy things — it'll just really go off the rails. But if I stick to small, bite-size chunks across the board, it will provide me with some really great feedback. It'll provide me with a better video description than I could write manually, because it'll tell me the timestamps for those important moments in that particular episode of the podcast that people will be interested to hear and see.
Tim Collins (18:36)
Yeah, I think that's really where we're going, right? I mean, there's really the specialization by those agents themselves. They're stacked on top, but you and I won't know any different. We will just come out to, let's say, a financial-services GPT, and in there will be an underwriting bot. Underneath that will be fraud; underneath that will be collections. All that kind of stuff will be in there, and it'll just be one — but
there'll be these little sets that have been built on top of each other. And to us, it'll be seamless. It will help drive down the hallucination. I mean, humans, we break stuff — that's what we do. So somebody's gonna create a bot to break that bot. And that's what's really coming: I'll have my bot talk to your bot. Yes. Yeah, yeah, absolutely, absolutely.
Adam Parks (19:25)
The cyber attacks on the AI are not that far behind, let's be frank. Phone phreaking
will come back. I mean, some of that older tech — sometimes low tech is the best way to beat high tech. But that starts sending us down some different rabbit holes. From an ethical perspective, though, I have an interesting question. So: your organization has been built on consumer trust.
Tim Collins (19:37)
Shit.
Yeah, for sure.
Adam Parks (19:53)
You’re one of the few organizations out there with thousands and thousands of five star reviews from consumers through your interactions. How does consumer trust get built through an AI bot? How do these things start to come into play if we’re talking about this AI communication, whether it be written or verbal, how does one organization over another start to build that trust?
Tim Collins (20:20)
Well, I think it comes back to some of those things we talked about already, which is really around transparency, right? And the ability to go talk to a human at any part of the process. A lot of consumers, especially in debt collection, may not want to talk to a human about their debt, because it's kind of a shameful experience. But maybe they do need something verbal versus typing something out in a chatbot or whatever. So they'll actually want to talk to the bot.
But at any point in that conversation, they need to be able to have that exit out, right? And the CFPB is very concerned about that exit out because you could create bots today that would create what’s called doom loops, right? Once you get in, you can never really get out. And so you’re just kind of stuck there and you need to make that as easy as possible in order to be able to maintain that trust. Cause customers may love your bot and they may want to talk to your bot more than anything else.
But at the end of the day, they’re gonna wanna know that they can talk to a real human if they want to. They’re gonna wanna know that their data is secure, that there’s privacy. All of those things that AI is driving off of, they’re gonna wanna say, this is not being used for any other purpose, this won’t show up. I won’t see a Facebook feed all of a sudden, selling me something because I mentioned I’m a fisherman, whatever, those things, or a farmer.
since we were talking about farmers — got my John Deere tractor right there. Yeah, you won't see those ads. So that's kind of that trust element. And it's so easy today to capture all this data about these consumers, all the time, that you could use it for purposes other than what you set out to do — what the customer is expecting you to do. And when you do that, you start to violate their trust. And then those Google reviews start to go down, and then, you know,
It just becomes a slide from there.
Adam Parks (22:16)
They lose faith in the process itself and the doom loop is definitely something to be concerned about. You know, sometimes you can’t get out of one of the existing IVR systems to get to the next layer.
Tim Collins (22:18)
They do. Yeah.
Yes, exactly,
exactly. Zero, zero, zero, zero. Yeah.
Adam Parks (22:32)
It was only maybe
10 years ago that they added the "operator, operator, operator." If you yell it loudly enough and frequently enough, they'll actually get you through to an operator. It's been nice that they've learned that. I was very curious about how the AI bots were going to handle curveballs — things that are not part of the script. I feel like the early technology we saw was built from if-then statements.
Tim Collins (22:36)
Right.
Adam Parks (22:56)
Now we're starting to move more towards that generative AI. How do you create a response to that? How do you put the responses in a box if it's generative — a compliance box?
Tim Collins (23:10)
Yeah, I think
a compliance box. Well, I mean, I think you're right. You're always going to list out your keywords — like, the FCC has come out and said you've got to honor "stop," opt-out, "don't text me," all that stuff. So you're going to put those core functionalities in there. But really, the power of the large language models is being able to grab customer intent. If you think about it, you can grab whatever the intent is, because the
bot's listening to the words, converting them to text, interpreting that, converting it back into voice — there's this process that happens very quickly; that's the latency. There are models now coming out where they don't convert — they're just using the voice to begin with — so that piece is super exciting. But you can start to grab the intent from the words themselves, like if they get frustrated. And you layer on top of that stuff that we've had for a very long time:
escalation in voice, talking faster, raising your voice. Those are all things that can be indicators that you need to move that over to a human. So it's not just layering on AI — that's great, but it's having all of those other elements that make us human and factoring those in, to be able to say, okay, there's clearly escalation going on and I need to transition this over. Or a human jumps in, listens to the call,
and says, no, it's just him, he's excited — he's talking about his kid's baseball game, not about the debt collection experience. And so you can figure out that he's talking about baseball. But if you don't have that element — the large language model with all the other stuff layered on top — you're one-dimensional. So I think it still has to be figured out, Adam, to your point, because you have to bring together all that stuff that we've had for a very long
period of time, and now you're layering those LLMs into that process.
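The layering Tim describes — hard regulatory keywords first, then softer escalation signals that hand the call to a human — might look roughly like this in code. The keyword lists are illustrative, not any regulator's or vendor's actual rules.

```python
# Toy intent router: regulatory keywords are honored first, then
# empathy/escalation triggers hand off to a human; otherwise the
# bot continues. Real systems add voice signals (pace, volume).
OPT_OUT = {"stop", "don't text me", "opt out"}
HANDOFF = {"passed away", "funeral", "speak to a human", "agent"}

def route(utterance: str) -> str:
    text = utterance.lower()
    if any(k in text for k in OPT_OUT):
        return "opt_out"          # regulatory keyword: honor immediately
    if any(k in text for k in HANDOFF):
        return "human_handoff"    # empathy or explicit request: escalate
    return "bot_continue"

print(route("My dad just passed away"))  # → human_handoff
print(route("STOP"))                     # → opt_out
```

Tim's point is that keyword rules alone are one-dimensional — the LLM's intent detection and the older voice-escalation signals have to be layered on top of a baseline like this.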
Adam Parks (25:09)
Which is interesting. But we've talked about the speed of adoption for these things, and the ethical challenges that come from speed when we start doing things that we don't necessarily fully understand. And I don't think that speed is gonna slow down at all.
If anything, we're going to be speeding up even more. Mathematically, I always go back to the availability of data storage on Earth, and how exponentially — I forget what the factor is now, it's either 10 or 100, but on an exponential factor, on an annual basis — the sheer volume of digital content that could be stored on planet Earth is rising that fast. And
only since 2007 has the amount of content that we're actually creating begun to fill that storage capacity. It was really the advent of the iPhone, with the pictures and videos in your pocket, that exploded that data creation. But what does that look like going into the future?
Tim Collins (26:03)
Yes.
Well, I think there's a couple of things there. If you think about it, AI really came about because we have data, and we have so much of it. People think AI has just arrived — the theory has been around for a very long period of time, but we haven't had the compute power, and we haven't had the data sets, to be able to train it to get to the large language model. To your point: 2007, Steve Jobs, the iPhone comes out. Now I can snap a picture, text message, iMessage — all of that is data. So we have more data on our phones today
than was probably created in the world from the beginning of time all the way up to the 1950s — maybe even beyond, maybe into the 1990s — and it's now sitting on my phone. Yeah, bring it up to 2009. Yeah, yeah, yeah. And now everybody's creating that amount of data every single year, right? Those are huge, massive sets of data. And so humans cannot comprehend,
Adam Parks (26:48)
I bring it up to 2009. I feel like that’s the moment for data storage.
Tim Collins (27:08)
let alone process, that level of data. So a machine actually has to do it, and that's why we've really gotten into the AI. It's not that AI created this; it's that we needed AI to manage the data for us. And we're going to continue to see that. So if you think about the old days, what did we care about? We sent a letter, and we never knew if you got it or opened it unless it got returned. We made a phone call, and we never knew if you answered it or ever heard the message.
Today, you can create an email, send it, see when it was opened, see if they went out to a site, see if they came back to that email, see if they clicked off that link. You have all of this data happening all the time in the debt collection space, and you can trigger events off of that data. So now what happens is: Tim's gone out to the same email four times. Even though we've sent him 10 other emails, he's stuck on this one. So one, he must've made it a favorite. Two, he's waiting for something, because he keeps going back out to the website but doesn't see what he wants.
And so maybe I can give him specific content written by the model — the large language model — that says, okay, I'm going to put this piece in there and send it off to him: hey, Tim, we've got a settlement offer, it's 5% off, and away you go. And all of a sudden that gets him to convert. I've just personalized that experience to Tim based upon his heuristics, what he has been doing, and I've been capturing that data the whole time. Whereas in the past we would have never captured that data. We had a status code.
You know — and remember, it was a number, because we couldn't use words; too many digits took up too much memory, too much space. I put in a 99 — what the hell's a 99? I've got to flip through the paper on my desk to figure out what a 99 was. We're so far past that today; it's just phenomenal from a technological perspective. But to your point, it does have this impact, and the data centers are getting bigger and bigger.
There is the potential for a Terminator event, because the AI data centers are getting so big and using so much power that there's not enough power in the world to run all the data centers, right?
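The engagement-triggered personalization Tim walks through — a customer returning to the same email four times fires a tailored settlement offer — can be sketched as a simple trigger rule. The four-visit threshold and the 5% figure come straight from his example; the function and field names are hypothetical.

```python
# Toy event trigger over email-engagement data: repeat visits to one
# email fire a personalized settlement offer for that customer.
from typing import Optional

def check_trigger(email_opens: dict[str, int]) -> Optional[str]:
    """Fire a personalized offer when one email draws repeat visits."""
    for email_id, opens in email_opens.items():
        if opens >= 4:   # "Tim's gone out to the same email four times"
            return f"send_settlement_offer({email_id}, discount=0.05)"
    return None  # no trigger: keep the normal workflow running

activity = {"email-07": 4, "email-02": 1}
print(check_trigger(activity))  # → send_settlement_offer(email-07, discount=0.05)
```

In a real system the trigger and the offer content would come from a model scoring the whole engagement history, not a single hard-coded threshold — that is the "status code 99 to rich data" shift Tim is describing.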
Adam Parks (29:13)
What’s the new
the SoftBank situation after the inauguration of Trump: SoftBank, Oracle, and somebody else came together on 500 million — 500 billion, I'm sorry — in infrastructure investment for AI. And one of the comments I heard during that press conference that I found most interesting was somebody's comment on the power. I think it was Trump who said, well, we're going to start generating that power right there at the factories. And I said,
Tim Collins (29:21)
Yes. 500 billion. Billion. Yes.
Adam Parks (29:42)
that's interesting, because we've had some rules around the creation of power as it became a utility, and what that meant to those utility organizations — there have been restrictions against actually being able to generate your own power at your own plant. I don't necessarily agree with that. It feels like a very Eisenhower-era solution
Tim Collins (29:53)
Right, right.
Yes! Yeah. Yeah.
Adam Parks (30:04)
to the problem, for lack of a better standardization guru, I guess. We still never went to the metric system — I don't understand that portion — but that's another story for another day.
Tim Collins (30:16)
I want to be on
that podcast. You sign me up when we get there. Yeah.
Adam Parks (30:19)
My wife
and I had that argument in a car the other day, because coming from Brazil, she’s like, I don’t know what an inch is. Like, I don’t know.
Tim Collins (30:25)
Yeah, I have no idea.
What do you mean it's 52 degrees out? That's boiling. I don't know what's wrong with you people.
Adam Parks (30:30)
We have that discussion
on the regular, especially when it comes to temperature. For some reason, my brain does not work in Celsius under any circumstance. I can translate to Portuguese, but not Celsius — I don't understand it. But you bring up so many good points. I wish this podcast went on for four hours. I feel like we might have to extend this one in the future to even begin to get our hands around some of the things that fall in here. But I have another interesting ethical dilemma for you that wasn't on our
Tim Collins (30:37)
No.
Adam Parks (30:59)
original
prep list, that I kind of thought of as we were talking our way through this: do you think that the personalization of messaging using artificial intelligence is building or deteriorating trust with consumers? And I'll say the fintech consumer clearly is more open to communication with an organization that has no face, a faceless organization. But then you have the Ron Swansons of the world.
And I'm going to use this character from a Parks and Rec episode, I want to say from the early 2010s, where he got a pop-up box that said, hey Ron, happy birthday, or whatever. And he had to disassemble his computer, because it made him so much more nervous that the computer knew his name to begin with. Ten years down the road from that episode, do you think we're past that, or do you still think there's a subset of consumers who are just afraid of the entire idea of communicating with a faceless organization?
Tim Collins (31:58)
Yeah, I'm going to take it one step further, just to give us context here: the capability exists today, if I have enough data on you, to personalize and make it so that you don't even know, right? You'd have no idea. So there's that leap that could actually happen, where most people are like, I don't know if this is a human or AI. And really, what are they going to look for? Because even if it's an AI, right,
I'm gonna build in ums and ahs, and I'm gonna put in typos, and I'm gonna do all that other stuff, and it could look like it's a human, but the whole time it is AI. And so I think that's what's gonna be startling to some people, and then they're gonna be like, do I care? Because there's nothing I can do about it. Right.
Adam Parks (32:50)
Okay, so the next question is about the value
of AI content. Something that's written by AI versus something that's not. And I've been experimenting with this myself on the tail end of a podcast, saying, okay, let me go create these YouTube descriptions and other things to accompany the process. But then the question starts to come down to
all of the AI checkers out there, which are very important from a collegiate standpoint, from an academic standpoint: is this your own work? Are these your own thoughts? But then the question becomes, in a B2B environment, does it matter? Is it an ethical violation to try and mask the way in which content is being created? Is it wrong to create content with AI, or is it only wrong to hide it?
Tim Collins (33:42)
I think right now we're kind of in that phase where we're back to transparency again, right? Because, Adam, what you and I talked about as part of the prep for this is that we could take all the content you have ever done and create the Adam AI bot. And we would prompt it so that's the only thing that's in there. It's not going out to the web, it's not grabbing anything else. And we could say, I want you to write an article
in my voice that talks about AI and AI ethics, and it would do it, because it's got the large language model and it's only using you as a resource. And it would sound like you. If I read it, I'd be like, this is so Adam, right? And it would keep putting in your misspellings. If you had misspellings or anything like that in there, it could do that. And it's the same for me, too. That's not how you spell that word, Tim, but...
Adam Parks (34:35)
It’s definitely me.
Tim Collins (34:40)
the way it should be spelled. But anyway, I think we're kind of in that in-between space where people are going to really care about it, but only for a very short period of time, because very quickly the assumption is going to be that it was AI written, or that it was AI enhanced. Which, coming back to where we started: we don't have to make this jump to where all the jobs are replaced by AI. We can bring it back to where we use AI to make
people better, whether it's the back office or the front office. Say you loaded up all of your agency manuals for all of your clients, and it's listening to the call, going out to that agency manual, grabbing the piece of information you need, and feeding it to your agent. Hey, will this be credit reported? Yes, it will, but we only credit report after 90 days. And it gives you the blurbs to say. And in the future, eventually, it will say that blurb itself,
and it may sound like your voice, because you've given consent to use your voice. And so now I'm working seven jobs, right? I've got 60. Yes, absolutely. Yes, absolutely.
Adam Parks (35:42)
That’s where the deepfakes start and really become an issue.
And starting to play around
with Sora, I was shocked. I gave it some content of myself and said, you know, show me walking off the stage after accepting an award, and so on. And it provided me with this whole video. Then I started playing around with the idea of writing a children's book and then animating that children's book to go with it. We wrote a basic outline and storyline and fed it in. And it's been interesting to see how these things happen.
Tim Collins (35:50)
Yes! Yes!
Yes.
Yes! Yes!
Adam Parks (36:13)
Back to the ethical dilemma that you mentioned: are we enhancing the people, or are we replacing the people? I think we're enhancing the people. The way that we approached it in our organization was, we identified 10 different use cases for artificial intelligence across our organization.
Then I'll work with each team to further enhance their prompts and their profiles and what needs to happen for their particular use case, and we'll work through each use case one at a time. Because if I just came in and said, this is how you're doing it now, everyone's going to panic: oh my God, how long am I going to be here? I used to do this keyword research manually; now what am I supposed to do? Well,
you can still do it manually, but let's enhance the information that we're using to make those decisions. Let's put more, or better, information in front of our human eyes to make a decision, and let the human run it, because we're not at a point of allowing it to make the decisions for us.
Tim Collins (37:00)
Yeah, absolutely agree.
No, and to your point, there's still a need for humans to be able to look at the work itself and test it. In the accounting field, there's so much AI coming out because it's so structured; legal too, to a certain extent, though lawyers are protected by their bar. But there are all these GPTs that say, take your contract, upload it, and we'll tell you what's wrong. Now, I care about certain things, and other lawyers care about other things, so it's different. But I could take all of the contracts that I've ever reviewed,
put them into my own ChatGPT, and then I can say, tell me how closely it matches. But at the end of the day, I still have a fiduciary duty to go out and read that contract from the very beginning to the very end. Now, the nice piece is that ChatGPT has come in and highlighted some stuff for me, so that may speed it up a little bit, and it may have suggested language. A lot of times, by way of example, I'll get a provision that says,
Adam Parks (37:53)
Sure.
Tim Collins (38:02)
confidentiality, and, let's say, indemnification. It's much easier now. Say the indemnification is one-way. In the old days, I would change everything to "parties" and all that other stuff. Now I say: take this indemnification language, using the same exact language that's here, and make it mutual. And then it uses the exact verbiage, which matters, because indemnification can be asked for in different ways. And then I can take that provision, read it to make sure it fits, and plug and play it in.
Right? And sometimes it needs some tweaks, because you don't refer to "parties"; it's got to list out both sides and all that other stuff. But the bots have gotten so good that they can do that piece for you. So instead of me having to write it all out and type it in, it's already in there and I'm editing. I think you're going from creation to editing, but there's still that human element in there, which, Adam, I think to your point, allows your team to do higher-level stuff. They still have to read it.
Adam Parks (38:44)
Yeah.
Tim Collins (38:57)
They have to go through it, they still have to tweak it, they still have to make sure it fits what you want it to do. But part of it has already been created. And that idea generation is why that fifth level of AI autonomy is fascinating, because you could just say, hey, create five business ideas for me, and boom, away it goes, based upon whatever I'm interested in.
Adam Parks (39:19)
It sounds like, Tim, over the next couple of years we're going to be facing a series of ethical dilemmas as it relates to artificial intelligence. And it sounds like we're going to have an administration in the government that's going to be rather open to some of this exploration. Based on that $500 billion investment, it feels like they're probably going to be pro-AI. Just my gut instinct; I've got nothing else to go on here but my gut, but that's how I feel about it. And I feel like there are
a lot of opportunities for the industry, for financial services, for the consumers: to take the shame out of the debt collection process and better enable those consumers to leverage the self-service technology that they ultimately want, especially if they're coming from a fintech-originated product. And the more we look at how this product was originated, and how we can identify the wants and needs of that particular consumer based on that, the more of a driving factor it's going to be into the future.
Tim Collins (40:18)
Yeah, and I think it comes back, Adam, to this: the fundamentals are always going to stay the same, right? It's where we started, when you talked about transparency. It takes a long time to earn trust; it takes a second to lose it. So if you're putting work out there and saying it's your own, but it's really ChatGPT's, and it hallucinates and calls your boss inappropriate words or something like that, right, because you never read it,
Adam Parks (40:25)
And always the truth.
Tim Collins (40:48)
the trust is gone. So there's the transparency on how you're using it, and the accountability. Accountability is so important. It's something we created, so it's something we have to monitor. We have to make sure that it's fair. We have to make sure that there's privacy and information security. Like the story that came out about DeepSeek: it's super exciting because you're able to do it cheaper and faster. Well, what they left out is that you can ask it anything and it will do anything.
It released over a million records of people's searches because there's no information security built into it. So that trust is gone for DeepSeek, right? It's just gone. And then you have that piece about it having to be inclusive. You talked about the biases, and making sure that piece is not in there, so that everybody can use it. But I think if you were to sum it up at the end of the day, it's looking at it and saying, is this better for humanity, the way we're doing it,
or is it just better for my company? Because if it's just better for my company, that's one thing; it can help the bottom line, but you may not be taking it all the way you need to go. Like, 24/7, 365 days a year, you can call us and there will be something here to answer your questions. That's a benefit. But saying, look, I don't ever want to have humans, that may not be the benefit that's there. Maybe.
You know, it's finding that balance. So if you're thinking about the transparency, accountability, and all that stuff, that's great. But just boil it down: is this going to be better for humans, yes or no? And if you say, yes, it's better for my pocketbook, that's not the human I was talking about. It's better for the stockholders; I know they're humans, but they're not the same ones I'm thinking about. Because the personalization, Adam, that's coming will be so good that you can almost manipulate humans
to do whatever it is you want, right? And so that's where the fundamentals...
Adam Parks (42:38)
The fundamentals remain the same: do good for the consumer,
right? It's the blocking and tackling. If you remain focused on a good consumer experience, this is a great technology to help you enhance that experience. If you think this is going to replace the people and the human touch of your organization, you're not paying attention, and you haven't listened to us for the last 40 minutes. But Tim, as we run out of time here, thank you so much for coming on and sharing your insights with me today. This was
Tim Collins (42:45)
Yes.
Adam Parks (43:08)
Fantastic.
Tim Collins (43:09)
Yes, and full disclosure, Adam, this is the real me. There is no avatar here. This is not a… But in the future, you may get Timbots. He may be here in the future. But Adam, thank you for having me, and thank you for your leadership in this space, for having these conversations and putting this stuff out there. It's very, very important that we go in with our heads up, looking to see what's coming over the horizon.
Adam Parks (43:30)
I appreciate that very much. And for those of you that are watching, if you have additional questions you'd like to ask Tim and myself, well, I sure as hell would not be surprised. Leave those in the comments here on LinkedIn and YouTube, and we'll be responding to them. Or if you have additional topics you'd like to see us discuss, leave those in the comments below as well. And hopefully I can get Tim back here at least one more time to help me continue to create great content for a great industry. But until next time, Tim, you're the man. Thank you so much. I can't appreciate you enough.
Tim Collins (43:56)
Thank you, Adam. Appreciate you.
Adam Parks (43:58)
Thank you everybody watching. We’ll see you all again soon. Bye everybody.
AI in Financial Services: Balancing Innovation, Ethics, and Regulatory Risks
The rapid rise of artificial intelligence (AI) in financial services is transforming everything from debt collection to compliance management. But with this innovation comes critical questions: Can AI be ethical? How do we ensure fairness in automated decision-making? What regulatory challenges do businesses face?
In this episode of the AI Hub Podcast, Tim Collins, Chief Compliance Officer at InDebted, joins Adam Parks to dive deep into artificial intelligence ethics, the importance of consumer trust, and why human oversight in AI decision-making is essential. This discussion highlights the risks, opportunities, and best practices for leveraging AI responsibly in financial services.
Key Insights from This Episode
1. The Five Stages of AI in Financial Services
AI has evolved rapidly, moving from simple automation tools to fully autonomous systems. Tim Collins breaks down AI’s progression into five key stages that financial services professionals must understand:
- AI as a Feature – AI-enhanced software, such as chatbots in customer service.
- AI as a Product – Standalone AI-driven tools like fraud detection systems.
- AI as a System – Integrated AI workflows that automate decision-making.
- AI as an Agent – AI-powered virtual assistants managing financial operations.
- AI as an Autonomous Organization – Future AI-first businesses with minimal human involvement.
Key Takeaway:
While automation can streamline processes, financial institutions must carefully manage AI-driven decision-making to ensure compliance, transparency, and ethical responsibility.
Notable Quote:
“AI isn’t replacing humans—it’s redefining their roles. Human oversight remains critical to prevent bias and ensure fairness.” – Tim Collins
2. AI Bias and the Ethical Dilemmas of Automated Decision-Making
Artificial intelligence is only as good as the data it’s trained on. Unfortunately, AI models can inherit biases from historical data, leading to discriminatory lending decisions, unfair credit scoring, and unequal debt collection practices.
Example:
The Goldman Sachs Apple Card controversy exposed how AI-powered credit underwriting offered significantly lower credit limits to women than men—despite similar financial backgrounds. This sparked concerns about AI transparency and fairness.
Key Takeaway:
- AI must be explainable – Financial institutions must ensure AI decisions can be understood and justified.
- Bias detection is crucial – Continuous monitoring and auditing of AI models help prevent discrimination.
- Human oversight is necessary – AI should support, not replace, human decision-making in financial services.
Notable Quote:
“The CFPB has made it clear: If you can’t explain how an AI system makes decisions, you can’t use it.” – Tim Collins
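To make "explainable" concrete: for a simple linear scoring model, reason codes can be read directly off each feature's contribution to the score. The sketch below is purely illustrative; the feature names, weights, and applicant values are made up and do not reflect any real scoring model or regulatory standard.

```python
# Illustrative "reason codes" for a hypothetical linear scoring model.
# All weights, feature names, and values are made-up examples.

WEIGHTS = {
    "utilization": -40.0,        # high utilization pulls the score down
    "on_time_payments": 25.0,    # payment history pushes the score up
    "account_age_years": 5.0,    # older accounts push the score up
}

def score(applicant):
    """Linear score: sum of weight * feature value."""
    return sum(WEIGHTS[k] * applicant[k] for k in WEIGHTS)

def reason_codes(applicant, top_n=2):
    """Features contributing least to the score, i.e. the candidate
    reasons a human reviewer could communicate to the consumer."""
    contributions = {k: WEIGHTS[k] * applicant[k] for k in WEIGHTS}
    return sorted(contributions, key=contributions.get)[:top_n]

applicant = {"utilization": 0.9, "on_time_payments": 0.5, "account_age_years": 2.0}
print(score(applicant))         # -13.5
print(reason_codes(applicant))  # ['utilization', 'account_age_years']
```

Because every contribution is an explicit weight-times-value term, the model's decision can be justified line by line; that traceability is exactly what black-box models lack.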
3. Consumer Trust and AI: Transparency is Non-Negotiable
With AI-driven automation becoming more common, consumer trust in AI is more important than ever. Financial services firms must be upfront about how AI influences decisions in areas like:
- Debt collection – AI-driven communications (emails, texts, chatbots).
- Lending decisions – AI-powered credit risk assessments.
- Fraud detection – AI monitoring for suspicious transactions.
Consumers deserve to know:
- When AI is being used.
- How AI affects financial decisions.
- What rights they have in disputing AI-generated outcomes.
Key Takeaway:
Transparency fosters trust. Financial services firms should clearly disclose AI usage and provide human alternatives for dispute resolution.
Notable Quote:
“Consumers may prefer AI for convenience, but they must always have the option to speak with a human.” – Tim Collins
4. The Regulatory Landscape: AI and Compliance Challenges
AI adoption in financial services has outpaced regulation, creating uncertainty for businesses. Agencies like the Consumer Financial Protection Bureau (CFPB) and Federal Trade Commission (FTC) are actively scrutinizing AI-driven decision-making to prevent unfair treatment of consumers.
Key Regulatory Concerns:
- The “Black Box” Problem – AI models must be explainable and auditable.
- Bias and Fair Lending Laws – AI cannot disproportionately impact protected groups.
- AI and Privacy Laws – AI-driven data collection must comply with privacy regulations.
Key Takeaway:
Firms using AI must implement strong compliance frameworks to align with emerging regulations and avoid potential legal risks.
Notable Quote:
“Regulators aren’t against AI—but they demand fairness, explainability, and accountability.” – Tim Collins
Actionable Tips for Ethical AI Implementation in Financial Services
- Establish AI Governance and Compliance Protocols
- Create AI compliance frameworks to align with industry regulations.
- Conduct regular audits to detect and mitigate AI bias.
- Use AI Explainability Tools
- Deploy interpretable AI models that provide clear reasoning for financial decisions.
- Maintain human oversight in AI-driven processes.
- Prioritize Consumer Transparency
- Clearly disclose when AI is being used in decision-making.
- Offer human escalation options when consumers dispute AI-generated outcomes.
- Monitor AI for Unintended Consequences
- Regularly test AI systems for fairness and accuracy.
- Develop internal AI review boards to oversee compliance.
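As a rough illustration of what a recurring bias audit might check, the sketch below computes a disparate impact ratio over two groups' binary approval outcomes. The group data is fabricated, and the 0.8 threshold (the "four-fifths rule" borrowed from employment-selection guidance) is only one common heuristic, not a rule any regulator has mandated for AI systems.

```python
# Minimal disparate-impact check for a binary decision (e.g. approvals).
# Group data and the 0.8 threshold are illustrative assumptions.

def approval_rate(decisions):
    """Share of positive (1) outcomes in a list of 0/1 decisions."""
    return sum(decisions) / len(decisions)

def disparate_impact_ratio(group_a, group_b):
    """Ratio of the lower group's approval rate to the higher one's.

    Values near 1.0 suggest parity; values below ~0.8 are a common
    red flag that warrants human investigation.
    """
    lower, higher = sorted([approval_rate(group_a), approval_rate(group_b)])
    return lower / higher

# Hypothetical model decisions for two demographic groups.
group_a = [1, 1, 0, 1, 1, 1, 0, 1]  # 75% approved
group_b = [1, 0, 0, 1, 0, 1, 0, 0]  # 37.5% approved

ratio = disparate_impact_ratio(group_a, group_b)
print(f"Disparate impact ratio: {ratio:.2f}")  # 0.50
if ratio < 0.8:
    print("Flag for human review: possible adverse impact.")
```

Run on a schedule against fresh production decisions, a check like this gives the internal AI review board a concrete, auditable signal rather than an ad-hoc impression of fairness.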
Episode Timestamps for Easy Navigation
00:00 Intro – AI in financial services and today’s guest, Tim Collins
03:15 The Five Stages of AI – Where we are and what’s next
08:02 Regulatory Challenges – How compliance agencies view AI decision-making
14:30 AI Bias & Consumer Trust – The risks of black-box decision-making
22:16 Human Oversight in AI – Where automation meets accountability
35:42 The Future of AI in Collections – Ethical considerations for scaling AI
Explore More AI Insights
- Watch This Episode on YouTube: https://youtu.be/cDB56v68G-A
- Learn More About InDebted: https://www.indebted.co/en-us/
- Connect with Tim Collins: https://www.linkedin.com/in/collins12/
About Company
InDebted
InDebted is a global, technology-driven debt collection agency focused on transforming debt recovery through AI and machine learning. With operations across multiple regions, it prioritizes a personalized, consumer-friendly approach to collections.