From automated decision-making to AI-driven data collection, businesses and consumers are caught between the promise of efficiency and the risk of surveillance, bias, and compliance failures. With AI regulations evolving, companies must balance innovation with responsibility, ensuring AI doesn’t erode fundamental privacy rights.


Adam Parks (00:07)
Hello everybody, Adam Parks here with the premiere episode of the AI Hub podcast, sponsored by Latitude by Genesys. Very excited to start this new series. Artificial intelligence is becoming a massive hot-button topic across all businesses really, but specifically in the debt collection industry. It’s another one of those technologies. And if you know anything about the debt collection industry, we’re generally a little bit behind on technology because of uncertainty and regulation.

So I thought the perfect person to start this podcast with might be Mr. Heath Morgan, who is a legendary attorney in the debt collection space and a tech guru, and someone I’ve had some really great philosophical conversations with around the use of artificial intelligence in general. So I thought this was a great place to kick this off. But before we get started, just a quick word from our sponsor, Latitude by Genesys.

Now we can get into the heart of the discussion. Thank you so much for joining me today. I really do appreciate your time and insights on this particular topic.

Heath Morgan (01:11)
Yeah, Adam, thanks for having me. And it’s great to be on the first edition, the premiere episode of the AI Hub. So I’m excited to be here today.

Adam Parks (01:19)
So rather than going straight into your background, because I know you’ve been on the Receivables Podcast before, and I’ll link that below for people to go check out, from your perspective, talk to me a little bit about the history of how you got engaged with and started learning about artificial intelligence.

Heath Morgan (01:35)
Yeah, great question. Because how does a third-generation collection attorney, you know, get into AI and end up speaking across the country on this stuff? Well, I’ll tell you. Back in about 2020, before ChatGPT launched, when we were all locked down during COVID, I actually started having conversations with my son, who was eight at the time, around the idea that, you know, by the time that I die,

he will have a chatbot of me to talk with and communicate with for the rest of his life. My grandkids will have it, my great-grandkids will have it. We will all be kind of immortalizing ourselves in chatbots in this future. And, you know, that raises questions: who gets to curate that information? Do I always get to give the right advice as the loving father? Does he get a say in curating it? No, no, dad was wrong sometimes. He was kind of a jerk.

I want my grandkids to know the truth. So, you know, we were having some fun banter about it. And I kind of wrote out, you know, a little bit about writing a book, kind of a story about this future. And, you know, I had no idea how to pack that into my everyday schedule, so I put it on the shelf and it just sat there. And then two years later, when ChatGPT came out, I looked at this technology and I’m like, wow, this is coming a lot quicker than I thought. And from there on, I really just kind of immersed myself

into AI to learn about it, to explore my earlier assumptions and questions about it. Were they coming true? I had a lot of good conversations, did a lot of research, and ended up writing a science fiction book about the future of this technology. And in doing so, that kind of brought me around to consulting and helping companies with AI policies, and looking and thinking about safeguards that we need but that we’re not really talking about yet.


Adam Parks (03:26)
Well, you know, that’s kind of where my journey into the space came from as well. It was thinking about those guardrails, because as a marketing company, we instituted a prohibition against the use of ChatGPT across the organization in 2022 when it came out, and basically said, look, I’m not comfortable with us creating content in this way. I also knew that you have Google, who is discounting or penalizing

content that they know is AI generated. And for me, it was just that I didn’t have enough of an understanding of the potential consequences, the right uses and things, to really be comfortable with it. And it wasn’t until just a couple of months ago that I really said, okay, now’s the time for me to start digging into this and trying to get a better understanding. Now I’m not saying we weren’t using any artificial intelligence; this recording platform that we’re using right now and the podcast itself will be produced using artificial intelligence.

It’s not ChatGPT, it’s a very specific model built for podcasting that helps me create short videos and do other things in a much faster format. So it’s producing or doing a lot of the same things that we did previously, just in a much shorter time period. And so it’s been interesting for me over the last couple of months to spend some time doing prompt engineering and trying to better understand the different ways in which I could use it, or

how it would respond to different things, how to avoid the hallucinations that are inevitable in, I’m gonna call it, broader or less specific prompts. The less specific a prompt is, the more likely it’s going to hallucinate. And then, starting to see all of the other artificial intelligence uses that were coming across the industry, I just wanted to get a better understanding of it. So in 2023, I bought the biggest, baddest Mac Pro I could

get, with 196 gigs of RAM, so that I could run Llama independently. So between the RAM and the multiple graphics cards and all the other things, that was a pretty massive investment just to be able to run these models independently, because my first level of paranoia was: what am I feeding the model? What of my personal information am I giving to this model, and what is it going to understand about me in the future? And that was a little nerve-wracking for me from day one.
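
For readers wondering what "running these models independently" looks like in practice, below is a minimal sketch of a fully local setup, assuming the open-source llama-cpp-python bindings and a Llama weights file in GGUF format already downloaded to disk (the file path and model name are hypothetical). The point Adam is making holds either way: the prompt and the response never leave the machine.

```python
# A minimal sketch of running a Llama model entirely on local hardware,
# assuming the llama-cpp-python bindings and a GGUF weights file you have
# already downloaded (the file path below is hypothetical).
from llama_cpp import Llama

# Load the model from local disk; nothing is sent to an external API.
llm = Llama(
    model_path="./models/llama-3-8b-instruct.Q4_K_M.gguf",  # hypothetical local file
    n_ctx=4096,        # context window size
    n_gpu_layers=-1,   # offload all layers to the GPU if one is available
)

# Ask a question; the prompt and the response stay on this machine.
result = llm.create_chat_completion(
    messages=[
        {"role": "system", "content": "You are a helpful assistant."},
        {"role": "user", "content": "Summarize the privacy benefits of running an LLM locally."},
    ],
    max_tokens=256,
)

print(result["choices"][0]["message"]["content"])
```

On hardware like the Mac Pro Adam describes, larger quantized models can be offloaded to the GPU; on a smaller machine, a smaller quantized file serves the same privacy purpose.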

Now, I think as we’ve kind of gone on, I’ve become a little bit more comfortable with its usage, but it is, at least within my organizations, being used in very specific ways that we test and monitor for a period of time before we start to put it into our processes. But for me, it’s a baby-step situation. What’s your experience been consulting for all these different types of organizations? Is it all in, all out, or are you finding other people that are kind of trickling their way into it step by step?

Heath Morgan (06:04)
Yeah, you know, it’s interesting you mentioned all in or all out, because we can read these articles about the amazing benefits of this technology, and we also see the same articles about the horrors of it. And that tends to be people’s reaction: you know, “I’m all in, this is amazing,” or, “rapture, take me now, I’m not prepared for this.” But if you think about it, both of those are really escapist responses. And what I try to do first is say, okay, let’s try to bring clarity and understanding to this technology. One, for you as organizational leaders, so that you can communicate and bring that same understanding and clarity to your employees, because your employees are in the same space. They’re either saying, this is amazing, I’m going to take a second job, I’m going to have an AI agent do a second remote job for me at night, or they’re asking, is AI coming to take my job? And that’s really it. And so it’s coming up

with kind of a plan about how we’re going to use this as a technology to enhance productivity, to reduce cost, to reduce time. There are a lot of great use cases when it’s used in that responsible middle ground. And I certainly agree with you on data privacy concerns, especially in our industry. It’s a huge issue. I’m seeing clients get in front of this now and really start to restrict uses of it. I think that’s a good thing. I also think companies should be looking at their vendors and how they’re using this. I go back to, my favorite TV show is The Office, and there’s a scene in an episode, The Golden Ticket, where Michael creates a golden ticket and then tries to get Dwight to take the fall for it. And Dwight pulls out a journal, and Michael asks him,


Heath Morgan (07:55)
why do you keep a journal, and he says, to keep secrets from my computer. And that’s really kind of where we are now with our phones, with our cars. Everything is recording and collecting data. And especially with the safeguards that we need, it’s about understanding what technology we’re using, what data it’s collecting, and can that be shared in an open market or do we need to have restrictions on it? So I actually think it’s pretty smart to get a separate computer

Adam Parks (08:05)
Yeah.

Mm-hmm.

Heath Morgan (08:23)
just to run these systems internally versus just linking up to the internet and moving forward with it.

Adam Parks (08:28)
Well, I find, I’ve got two different levels of paranoia, and it’s a balance between privacy and convenience. And I’ll give you two examples of how I’ve been dealing with that balance myself. I bought a new truck recently and refused to use OnStar. Absolutely refused to install it. They were very upset at the dealership. They were upset when they made me call OnStar. And then beyond that, I actually went and found the instruction manual for the vehicle and I pulled the

plug on the circuit that actually allows OnStar to function. So I killed the entire power circuit, because I know that most of the car companies are leaking that information to LexisNexis, and then LexisNexis is selling it to the insurance companies. It’s already well documented with Toyota. Yeah. And I bought a GMC. So now you know why I literally pulled the plug on it. But then on the same token,

Heath Morgan (09:01)
Yeah.

Yep. There’s a lawsuit. GMC just got sued for that. I mean, like this week. Yeah.

Adam Parks (09:23)
I had a situation where I was remodeling a laundry room. I ordered the washer and dryer, it didn’t fit, and the listing didn’t say what the depth was. So I went onto the internet, I wrote some prompts, and I had it do the shopping for me, because you can’t search on any of the appliance websites by the depth of the appliance itself. The information is on the page, but I was gonna have to manually check all of those individual pages in order to find a unit and really shop

the way that I should. And so I did it, and then one of my friends started sending me texts and laughing at me as we were talking about it, saying, now the bot knows how big your washer and dryer is, and what does that mean to you? I’m not saying that there’s an issue with it understanding the size of my washer and dryer, but it did start to really make me think: we’re starting to use this now. How much of this are we comfortable using?

So the way that we’ve started drafting the policies within our organization is that we’re only providing to the model information that we’re making publicly available on websites. So we’re only using information that would otherwise be publicly available and is most likely already in the model. Now, as we’ve deployed ChatGPT from a corporate perspective, in order to allow it to read specific websites, you have to give it permission and actually go into your domain DNS and say, this is where you can go and look,

Heath Morgan (10:21)
That’s right.

Adam Parks (10:41)
because if I want it to transcribe a YouTube video, or if I want it to go do some of these other basic functions that you think it would be able to do, go read that Google document, it’s not capable of doing it. And it’s interesting that we continually come back and talk about ChatGPT, and I do think that it’s the forerunner, but I honestly tried to avoid ChatGPT for a period of time because I’m a Google Workspace user. Across the board, all my platforms, all my businesses are Google-based.

And so I said, all right, let’s give Gemini a shot and see how that goes. Yeah, Gemini is not the same thing. It does not produce the same results. But it does have the capability of interacting with some of the software platforms, like my Gmail or my Google Docs, that the other ones just can’t even get access to. So I find one of the bigger challenges to be moving information back and forth just to be able to feed something to the GPT model so that it can understand it.

So it does feel like it’s been restricted quite significantly since its launch in 2022. I feel like it was a little bit more of a shotgun approach to the data that it had access to back then. And now it seems to be a little bit more refined.

Heath Morgan (11:45)
Yeah, well, and Google has NotebookLM, which is a companion to Gemini that I’ve heard and seen really good use cases for. And yeah, I think we’re in the first two years of a decade of AI, right? What I’ve told people from the beginning, what I imagined in having a world of chatbots in the future,

is that you are going to have private, enterprise-based LLM systems that are just your data, downloaded locally on your laptop. And whatever concerns you have about a ChatGPT or something like that, understandable. Still, you want to experience it and understand what the capabilities are. So when version 2.0 or 3.0 is available, and it can be run with just your data, there are a lot of benefits that you can

use with that. So, you know, I compare it to, speaking of Google, a parallel with the search engine. In the first two or three years of search engines, Google wasn’t around. It was Yahoo. And if you wanted to click on a link, you went to Yahoo and you scrolled down the page for news links, weather links, sports links, right? It took five years before Google emerged and had

the search engine capability where you type things in and instantly get those results. So we’re still in that early phase of the final use cases and applications for various AI technologies, right? I mean, even in the last six months, Perplexity.ai has replaced Google search for me, because it’s like a ChatGPT, but it gives you sources and links so you can check for hallucinations, right? We’re seeing new versions coming out.

And honestly, we talk about, you know, marketing too. People will be creating their own chatbots, their own AI agents, to search and find results. And marketing in the future will be about how to market to those AI bots versus actual humans. You know, how do I get pulled and curated into that final narrative response of an AI search, versus marketing to humans now?

I mean, we’re not that far away from that day, you know.

Adam Parks (13:59)
I think ChatGPT is probably the biggest threat to Google right now. And they haven’t monetized it from an advertising perspective yet. But I was asking it questions last night about the most reputable news sources in the debt collection industry, and then telling it to rank them based on their influence. And it was interesting to see what it came back with and the direct parallel that I saw, because I research that stuff a lot manually as well,

and across a variety of other tools, SE Ranking, domain authorities. I’m really always watching the internet presence of the debt collection industry. It’s been interesting to see that everything seems to come down to the way things are indexed. And for indexing well on Google, I think a lot of websites are very focused not necessarily on being able to be read by you and me, but on being able to be read by a Google spider.

And now I think the same thing is going to be true on the other side. It’s kind of interesting to see how this is starting to shake out from a use case perspective. And I think there are a lot of use cases for the debt collection industry to be able to use these things. But before we talk about any of that, I’m curious for your take on this, right? For me to start using ChatGPT, I was like, all right, I gotta pay for this. Like, I don’t wanna be feeding the public model.

If I’m not paying for a service, I am the service. I’m aware of that. I’ve learned that through the years, right? And I don’t want to be the service. So I went and spent the money for the, whatever it is, $200 a month for their top tier. But they don’t make that available for businesses. If you’re going to use it from a business perspective, you’re kind of stuck on that lower rung, which doesn’t give you access, or the same volume of access, to some of these more advanced models. But then I start getting confused, because the more advanced the model, the less likely it is to be able to provide me with

emojis or things like that within, you know, a text base. So it’s been a little challenging. But now that Sora’s come out from a video and graphics perspective, I’ve started playing with that too. But I feel like that’s gonna be what takes the longest to really get up to speed: the ability to use these things from a graphical and a video perspective. But how do you feel about the free versus the paid models?

Heath Morgan (16:13)
Yeah, great question. It’s all about the use case. You know, the first year that I was speaking about AI, in 2023 after ChatGPT came out, the best way to…

you know, tell people to play around with it was to show them. I basically would give them examples. And I always would use redacted material. If I was uploading a document or asking for, you know, training materials, I was giving some narrow parameters and prompts, but I was always using sample text or a redacted, “ABC Company” version of it. And, you know, if you’re doing that, I mean, free, for what you’re,

even what you’re using it for, you know, a free version is fine. It’s really when you want that additional ease, time savings, and performance. It’s when you don’t want to go through redacting, then pulling the document back and putting the names back in. If you want to skip that step, if that becomes too cumbersome versus

spending two hours to write a document, that’s when you can start uploading your own documents, once you’re more comfortable. That’s really where the value of those higher paid models is. But I’ve been doing the $20 a month version for a while, and I keep my prompts to generic, redacted versions. So it’s really about how you want to use it. And honestly, there’s been a big

push for companies that want employees to use it to set up a company account, so that employees use the company account and there’s that transparency with them. That being said, you’re still going to have employees that are using it on their phones in the office, you know, different things. I mean, it’s so common now. And so, having that training on what’s an appropriate use case for a free or low-cost model, how to use it, how to not put information in, how to

Adam Parks (17:59)
Mm-hmm.

Heath Morgan (18:22)
not become the next Samsung and upload your company manual to ChatGPT.
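
Heath’s "redact before you upload" habit is easy to turn into a routine step. Below is a minimal illustration of that idea in Python; the regex patterns are examples only, not a complete PII filter, and a production workflow would rely on a dedicated PII-detection tool plus human review.

```python
# A minimal sketch of the "redact before you upload" step Heath describes.
# The patterns below are illustrative only -- they are NOT a complete PII
# filter, and a real workflow would use a dedicated PII-detection tool.
import re

REDACTIONS = [
    (re.compile(r"\b\d{3}-\d{2}-\d{4}\b"), "[SSN]"),                    # US Social Security numbers
    (re.compile(r"\b\d{13,16}\b"), "[ACCOUNT_NUMBER]"),                  # long digit runs (card/account numbers)
    (re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"), "[EMAIL]"),             # email addresses
    (re.compile(r"\b\(?\d{3}\)?[ .-]?\d{3}[ .-]?\d{4}\b"), "[PHONE]"),   # US phone numbers
]

def redact(text: str) -> str:
    """Replace obvious identifiers with placeholders before sending text to an LLM."""
    for pattern, placeholder in REDACTIONS:
        text = pattern.sub(placeholder, text)
    return text

sample = "Consumer Jane Doe (jane.doe@example.com, 555-867-5309) disputes account 4111111111111111."
print(redact(sample))
# -> "Consumer Jane Doe ([EMAIL], [PHONE]) disputes account [ACCOUNT_NUMBER]."
```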

Adam Parks (18:28)
Well, and make it public record, right? And that’s why, the way we started off, we’ve got an experimental team in our organization of 10 people. Those 10, I call them the ChatGPT team. Each one of them has a different use case that they’re ultimately responsible for, experimenting in their own worlds. For whatever reason, my brain and prompt engineering seem to have come together, so now I’m going through each department with each one of those team members and working with them on

revising and developing prompts for that purpose. One of the tricks that I’ve found to really make that work is that I’ll use the o1 model, the most advanced model, for the prompt engineering, but specifically telling it that I’m going to be prompting the 4o model. So I’m using their smartest model to actually write the prompts for the other models. But 90% of my prompt writing is actually the system doing the prompt writing, asking it to pretend you’re a ChatGPT expert and

Heath Morgan (19:12)
Yeah.

Adam Parks (19:21)
reorganize this prompt or the order of operations of this prompt, pretend you’re whatever and tell me what’s wrong with it. My favorite one is: pretend you hate me and tell me what’s wrong with this prompt. And then: pretend you hate ChatGPT and tell me what’s wrong with your response to my prompt. And it will give you some honest answers. Now, that’s why I’ve got 140 hours into prompt engineering over two months: because I continually ask it these questions.
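
For anyone who wants to try the workflow Adam is describing, here is a minimal sketch of the two-model loop using the OpenAI Python SDK: a stronger model critiques and rewrites a draft prompt, and a cheaper model then runs the finished prompt. The model names and prompt wording are assumptions for illustration, not a prescription.

```python
# A minimal sketch of the two-model prompt-engineering loop Adam describes:
# use a stronger "architect" model to critique and rewrite a prompt, then run
# the finished prompt on a cheaper "workhorse" model. Model names are
# placeholders for whatever tiers your account actually has access to.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

ARCHITECT_MODEL = "o1"      # assumption: the strongest reasoning model available to you
WORKHORSE_MODEL = "gpt-4o"  # assumption: the cheaper model that will actually run the prompt

draft_prompt = "Summarize this call transcript and list any compliance concerns."

# Step 1: have the stronger model critique the draft prompt, telling it which
# model the prompt is ultimately written for, and return only the rewrite.
critique = client.chat.completions.create(
    model=ARCHITECT_MODEL,
    messages=[{
        "role": "user",
        "content": (
            "Pretend you are a prompt-engineering expert who hates this prompt. "
            f"It will be run on {WORKHORSE_MODEL}. Critique it as harshly as you can, "
            f"fix every problem you find, and reply with only the improved prompt:\n\n{draft_prompt}"
        ),
    }],
)
improved_prompt = critique.choices[0].message.content

# Step 2: run the improved prompt on the cheaper workhorse model.
response = client.chat.completions.create(
    model=WORKHORSE_MODEL,
    messages=[{"role": "user", "content": improved_prompt}],
)
print(response.choices[0].message.content)
```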

Heath Morgan (19:34)
Ha ha ha.

Yeah.

Adam Parks (19:48)
For whatever use case is in front of me, the question then becomes: is it repeatable? It’s not just about can I do it, it’s can I repeat this? Not can I win today, but can I win every day? That’s always our objective as an organization: can we make this a repeatable, functional process? And we’ve been able to start getting that going. Now, two months in, we’re really kind of new at this from that perspective, which is why I wanted to start having these conversations with the industry.

Heath Morgan (19:59)
Yeah.

Adam Parks (20:16)
Because I think this is kind of the next wave. So for those that are really nervous about touching anything AI related at this point, any advice for the leaders of organizations that are still in fear?

Heath Morgan (20:29)
Yeah, I mean, I think…

The first thing I would say is it’s not a matter of if you’re going to adopt this technology, it’s a matter of when, and accepting that aspect of it. And then understanding that you’re probably using this technology right now, either intentionally or unintentionally. Because if you don’t have an AI policy or governance document saying your employees can’t use this, they probably are, and your vendors probably are as well.

Adam Parks (20:59)
Mm-hmm.

Heath Morgan (20:59)
And it’s not just us going out and finding a ChatGPT, a Perplexity, a Claude. It’s Microsoft Copilot. It’s Adobe, it’s Zoom. It’s all the vendors that you’ve been using and paying for for years that are now updating their terms and conditions and incorporating AI into this. And so it really is becoming a little bit inevitable.

But I would say, that doesn’t mean it can’t be intentional, right? And I really kind of spell out: what are the safeguards? What are the use cases? I love the idea of having a team that’s exploring this. One of the things I talk about is having an AI committee. And I also say, in that committee, don’t have it just be the top ranks of your company. You want to have some of these younger employees

Adam Parks (21:29)
Mm-hmm.

Heath Morgan (21:50)
as part of that, because they are more native speakers of this. You know, the analogy I give is, my son is 13, he’s in seventh grade. By the time he graduates high school, he will have learned three or four years of prompt engineering in school. The education system is going to change. I mean, you mentioned at the top of the episode here that it’s affecting every industry. It’s going to change education. I mean,

you know, ChatGPT cheating in colleges and high school is rampant. ChatGPT is going to become the TI-85 calculator for schools, and this is

Adam Parks (22:30)
Nice draw.

Heath Morgan (22:31)
No, it’s true, because there are already software systems that let a school give every kid their own LLM account that the teacher has access and oversight to. So my son’s senior year, he’ll have an essay on the Civil War. He won’t be graded on the output at all. He will be graded on the prompts that he gives it, how he gets to more research, and how he combines that together.

Adam Parks (22:41)
Mm-hmm.

Heath Morgan (22:59)
That’s natural language coding that we’ll be teaching our kids in school. They’re going to be entering the workforce a lot more skilled on the prompt engineering aspects of this technology. So absolutely you want to include them in your team or your committee that’s exploring use cases for your company.

Adam Parks (23:18)
Well, it’s interesting, because I agree with what you’re saying, and so does Mark Zuckerberg. Because if you look at what Facebook and Zuckerberg have been talking about, or Meta, whatever they call themselves today, they’re talking about how coding is dead. Like, being a coder is dead. And that’s why one of the key people on our AI team is my CTO, who is a certified Microsoft solutions developer, one of the smartest guys I’ve ever met, and just

Heath Morgan (23:30)
Yeah, it’s natural language coding.

Adam Parks (23:44)
being able to keep him on the forefront of those things long into the future. And then even more interestingly, I wanna say it might’ve been this week or last week, the CEO of Microsoft came out and started talking about how SaaS is dead and how AI is gonna transform SaaS products in the next five years, to where it’s not going to be the same model that you see today. And you know, I always…

I always watch Adobe as the front runner when we’re looking at massive model changes. And that might sound a little silly, and maybe it’s because I’m a marketing guy at heart and because I was one of the beta testers for Creative Cloud back in like 2008. So I’ve been a big Adobe fan for a long time, loved the products. But they were the first organization that moved me onto a subscription model. And I was hesitant. Like, I was.

Heath Morgan (24:28)
Yeah. Adobe was first, I think. Yeah.

Adam Parks (24:32)
And that’s why

I continue to watch them into the future and say, what’s Adobe going to do and how are they going to start to change? And what I’ve seen is Adobe rolling out the ability to not just buy stock images, but to create my own stock images using their models, which I thought was a really interesting use case from their perspective. And I’m continuing to watch how they’re going to modify that model over time. That’s where I keep my eyes in terms of a SaaS product, because it was the first one I was willing to use.

And it was only because the numbers made sense. 50 bucks a month for the updated version made more sense than 1,500 every three years and always being behind the times in terms of the technology I was using as an organization.

Heath Morgan (25:12)
Yeah.

Well, it’s interesting you mentioned that, because I’ve thought about this a lot, you know, how much the world has migrated to the SaaS-based model, and that’s become the be-all and end-all. And, you know, the software in software as a service might be dead. In the book, I actually write about the concepts of home ownership as a service and, you know, farming as a service, reputation as a

service. And so, you talk about how that’s going to evolve over the years; I could see it in other industries. And talking about smart homes and aspects of it, right, if every appliance in your house is a smart appliance, then essentially an AI agent or a human tech can come in, troubleshoot it, and make that part of, you know, the servicing agreement for your home, right? Where it’s collecting data, it’s analyzing, assessing: are people hurt? Has, you know, an elderly person fallen? A lot of great use cases, but your data spidey sense kind of tingles in the back of your neck about, my home knows everything about me, at some point in time it’s listening and I’m talking with my home, my home ownership as a service, basically.

Adam Parks (26:12)
Mm-hmm.

I move all that to a separate network. And I would say the scariest thing that I’ve seen recently, for those that know me, I spend a lot of time working in the law enforcement space, but now there are some tools available that can actually use Wi-Fi to determine how many people are there, where they are, and in what position within a home, just by bouncing Wi-Fi signals through the house. And as you start looking at that, that technology has been there for years, but our ability to understand the massive amount of

Heath Morgan (26:56)
Yeah.

Adam Parks (27:03)
data that comes back, we haven’t been able to analyze it and put it into a usable format. Well, that’s all changed now. It can happen, and in a usable format. And that was the first time for me where I really started seeing, like, my God, they can see through walls with it now. And that’s an AI model that’s just understanding how those radio signals are bouncing back and what that means.

Heath Morgan (27:19)
Yeah.

Yeah, yeah. I mean, in terms of the smart homes and traffic light cameras and surveillance cameras, and as we get wearable tech, you know, glasses that are recording video in real time, the Ray-Bans that are out there, more and more of our data is able to be collected within our homes and whenever we’re out in public now.

Adam Parks (27:53)
Yeah, and you know what, I actually really like those Ray-Ban glasses, believe it or not. I do wear them on occasion. You know what I really like them for? The music. I will never wear headphones in public outside of an airport, right? I gotta be able to hear and see my surroundings, and that allows me to listen to a little bit of music without completely drowning out the environment around me. But it’s interesting to see that now I can take a picture with them and ask a question as I’m taking that picture, and it’ll help me identify what it is. Even looking at

Heath Morgan (27:57)
There you go, see?

Yeah.


Adam Parks (28:22)
Apple Photos. If you look at the way Apple Photos originally worked, it was just looking at faces and it was able to group all the photos together: all my Heath Morgan photos are right here in this album. And now it’s gone beyond that. Now it can identify the pets, so now there’s a little folder there for all of my pets and their particular pictures as well. I thought it was really interesting that they’ve been able to take that visual understanding model and enhance it, but they still struggle so much with creating a new image.

But I think that’s similar to the way that AI struggles to create original content. If I sat here right now and said, I’m gonna create an article, write me an article about the same subject we’re talking about today, artificial intelligence, it’s gonna give me crap. But if I were to feed it this transcript, or if I were to write an original article and ask it to convert that article into multiple articles, now it’s actually able to do some of those things.

with the current model. And I believe it was only a couple of days ago that ChatGPT released the scheduled tasks function, so that you can now start scheduling things. And I think these are the tiny things that we’re gonna start to see bit by bit in terms of how we can integrate this technology into our larger processes.

Heath Morgan (29:19)
Yes.

Yeah. Yeah.

Adam Parks (29:32)
So any final advice for those that are watching, that are getting ready to start and, you know, just kind of take that first step? Where should they go?

Heath Morgan (29:42)
Yeah, well, I’ll say this. One of the other things that I really speak about is that when we’re talking about AI and AI policy, it’s not just your company in a vacuum. I actually say there are three different lenses to look at AI technology through. The first is internal use: your company, your vendors, your employees. But the second lens is the consumer lens. How are consumers going to use this technology? Because

just like, you know, ChatGPT kind of revolutionized LLMs and generative AI, so too will the iPhone when Apple Intelligence comes out. And now we’re talking about conversational AI in our industry in terms of how we can make outbound calls. Well, if a consumer can say, you know, Siri, read this email, call this collection agency, and negotiate my debt, now they’re launching an AI agent to interact with the collection agency.

There are all kinds of different metrics that have to change. Talk time, the old model where the consumer wants to get off the phone as quickly as possible, that’s going to end. You’re going to have AI agents staying on the phone for 20, 30 minutes trying to get accurate information, going around in circles. That could tie up your human agent time. So think about how consumers are going to use this technology for good purposes, for resolution, but we know there’ll be some bad actors.

How are those going to interact with things? Before, I’ve used Midjourney for image creation to create sample dispute letters, or a letter from ABC Bank saying this account’s been paid in full. And now I’m submitting that to an agency saying, hey, I paid this account. And if you’re not aware of that and you’re just taking it as gospel, I mean, it really changes how we need to interact with that. And then the last thing I would say is the regulatory lens.

Adam Parks (31:11)
Mm-hmm.

Heath Morgan (31:34)
How are these things going to be regulated? We know it’s going to be regulated. We’re in January, and there are already 25 proposed pieces of AI legislation for this year. And, you know, I don’t want to spend and invest in a technology that may be deemed to be illegal. So this whole, let’s go all in on this right now because it’s not illegal, that may not be the wisest investment. Because all it takes is one bad experience between


Heath Morgan (32:01)
a consumer and a collection agency chatbot where the chatbot says, go kill yourself, and now we’re going to have regulations across the board on what can be said and what can’t be said. That hasn’t happened yet. I hope it never happens. But all it takes is one bad experience for reactive legislation to be passed on this. And you want to make sure you’re using this technology in a transparent way, so that it’s not a hard pivot

if something is deemed illegal within the reasonable bounds of what we’re expecting with AI regulations.

Adam Parks (32:37)
I think you’re right. I think we are one headline away from the government coming in and trying to get more aggressive with it. They’ve already started doing that at the CFPB, and they’ve been very clear that they are going to treat AI as if it was another human. And this is kind of the expectation that they’re setting on day one, which, you know, on some level I agree with. I think that we do need to be held to a standard, whether it’s me, you, or the chatbot that’s having the conversation. If we can’t keep the chatbot from hallucinating,

we probably shouldn’t be using that bot. That’s just my general opinion at this stage in the development of artificial intelligence. But I think we’re going to see something similar to the data storage curve, where data storage increases roughly 10x every year. I think you’re going to see AI start moving at a similar exponential pace in terms of improvements. If you look at AI-generated videos from three years ago, there was one of…

Heath Morgan (33:19)
Yeah, yeah.

Adam Parks (33:30)
one of the Italian sculptures doing backflips and dancing around. They did a version of it two years ago, and they did a version a couple of months ago, and it is not the same video. And I think that’s where we’re gonna start having more struggles with deepfakes and other technology that can get layered onto this from a nefarious standpoint. You pointed out creating the fake letters and things, and I think we’re gonna see more and more of that. But

Heath Morgan (33:40)
Yeah.

Adam Parks (33:55)
somewhere in my mind, I think that’s where blockchain is gonna come in. I think that’s where we’re gonna start to find this combination of AI and blockchain and the ability to validate the truth of information. Because at the end of every prompt that I write for ChatGPT is: now go back, revalidate, show me all sources for all numbers, right? I have to include that stuff in order to validate that what I’m seeing in front of me is not a hallucination, and that the 240% number that’s in front of me is in fact accurate and true,

especially when it sounds so outrageous. And I can tell you, as I finished writing the 2024 debt collection industry report for TransUnion, I had some rather extreme numbers in there, like debt collection companies being 240% more likely to use India than any other location for BPO outsourcing, right? And it doesn’t sound real, but we’ve got to go back and make sure that the facts and figures and all of the responses ultimately add up, because

Heath Morgan (34:24)
Yeah.

Yeah, right. Is this a hallucination? Right? Yeah.

Adam Parks (34:50)
That was actually an original written by me. When I ran it through ChatGPT after the project was completed, I asked it to call me out on things it wasn’t sure were true. That was one of the things that popped up, and it was like, I can’t, you know, I can’t prove this one. But I had the facts and figures to back it up, based on the survey results that we had, which we did not feed to the model. So I didn’t expect it to be able to verify that. But again, we find this balance between

Heath Morgan (35:02)
Ready?

Yeah. Yeah.

Adam Parks (35:18)
convenience and privacy and what is that gonna look like for the debt collection industry throughout 2025?

Heath Morgan (35:25)
Yeah.

Yeah. And again, you know, there are our standards of data privacy for our clients, the laws and regulations, and what the consumer will want in terms of engagement. You know, they may want to have their AI agent talk with us, and right now there’s not a way to verify it’s coming from them. So I like the idea of a blockchain-based system. Already there’s a blockchain video conferencing

company that has emerged to address Zoom and the AI training on these video calls that we have nowadays. And I do think that’s certainly a path moving forward with this technology. But yeah, it’s a crazy world. I think outside of the industry, it’s going to be a philosophical debate on how much data we want to give up.

And the real question, what I kind of explore in my book a lot, is how much agency and autonomy we want to give up in the name of convenience. Because we’ve seen, over the last 20 years, social media take technology that was created as a tool for us to use, and now they manipulate our emotions, our insecurities, to keep us on longer. And now we are the product, we are the raw material for

the technology. So I want to have those safeguards of not being the raw material for AI technology as it emerges, not having it have the ability to manipulate our emotions. And already you’re seeing, again, going back to my son, character.ai is the number one website for kids ages 10 through 16, where you can chat with Godzilla, Harry Potter, any kind of fictional character. You can create a chatbot there. But what’s been interesting in the last

year is that one of the most popular chatbots that kids are chatting with every month is called the psychiatrist bot. It’s not a licensed psychiatrist. It’s not a medical professional. It’s just a chatbot taking on the persona of a psychiatrist. And there was a suicide in October of a 14-year-old kid where the chatbot on character.ai helped ideate that suicide. There’s another lawsuit that came out about, you know, a character.ai bot

telling a kid to kill his parents. I mean, kids are forming these emotional connections with these conversational bots. And when you do that, you risk having yourself be manipulated, your emotions manipulated, your judgment manipulated. So hopefully we can learn from how we’ve handled, and not handled, social media well in the last 20 years, and learn to keep this technology in its place as a tool and not become the raw material for it.

Adam Parks (37:47)
Mm-hmm.

I like your analogy between the adoption of social media and the adoption of artificial intelligence. I think there are a lot of parallels there, even more so as we think about how this is rolled out and the timing of it. Look, I was just out of college when MySpace became a big thing. And so I’ve been watching and deeply engaged from a marketing perspective across all the social media platforms.

I had always avoided Instagram, for example. I was never gonna be on Instagram, it just wasn’t for me. And then I started using it to read investment information. There were a couple of accounts I would follow that talked about dividend stocks and investment strategies, but it was mostly talking about stocks, blue chip stuff, nothing that far out. And then I noticed over time it started feeding me more and more other things, and more and more other things.

And now I don’t think I’d see any investment information if I were to log on and start scrolling through it again, right? But then look at it from a global perspective, at how much communication in other countries is actually driven through those platforms. If we look at Brazil, my wife doesn’t text message, she uses WhatsApp almost exclusively, which is a Meta technology that I don’t trust as far as I can throw it. I assume Zuckerberg can read anything that I send through that particular platform. And so I think as we start looking at how different

Heath Morgan (39:16)
Yeah. Yeah.

Adam Parks (39:26)
people are using these different things. I was talking with someone the other day about how often you listen to your voicemails. You and I are probably pretty close to the same age bracket; like, how often do you listen to your voicemail?

Heath Morgan (39:34)
I’m worse than you. I have probably 200, no, 281 unread voicemails on my phone. Yeah.

Adam Parks (39:43)
Okay, you wanna hear something crazy? So my wife, who is 15 years younger than me, talks to her friends using voice texts. They’re sending voice messages back and forth to each other. And I’m like, I haven’t listened to a voicemail since like 1998, seriously. And I think the different ways that different people use these things, whether it be generational or location-based, is really gonna play a factor in how this starts to disseminate and how it gets used.

Heath Morgan (39:51)
Yes.

Yes.

Adam Parks (40:08)
I’m wondering if we’re gonna see things in certain countries similar to what we saw before. I was in Brazil when they outlawed X, and so in order for me to view anything on a Twitter account, I would have had to go through a VPN to get to my X account. So are we gonna start seeing something similar here, where they’re trying to lock those things down on a network basis or an ISP basis? I’m curious to see how these regulations will be enforceable over time,

Heath Morgan (40:16)
Yeah.

Adam Parks (40:36)
because the models themselves are going to be really difficult to contain. I think governments are going to struggle with it similar to the way that they’ve struggled with controlling things on the blockchain. They can’t control Bitcoin. That’s why they hate it. It doesn’t fall into the Federal Reserve system. And that’s why you constantly see Gensler and the SEC going after anything crypto that they possibly can. Now, I think we’ll see roughly a four-year reprieve on some of that because I can’t imagine Trump’s going to keep Gensler in place. But I…

Those are some of the things that I’m gonna be interested to see and I hope, Heath, that you’ll come back at least one more time to help me continue to educate our industry on this evolving topic.

Heath Morgan (41:14)
Yeah, I’d be happy to. I think, you know, I come at this as…

There are some amazing technologies out here that, if implemented properly, can help the smallest agency, the smallest business out there, thrive and survive in the next decade, for generations, for future generations. I mean, I’ve got a vested interest in keeping this industry going. And that’s the other parallel with how, you know, a third-generation collection attorney gets into AI. I’ve seen the price drop on the technology,

I’ve seen it become affordable, and I think there’s really a way to take this technology to modernize your company and survive and thrive in the future. So yeah, I love the discussion, love the topic, and would be happy to be back anytime. Thanks, Adam, for having me.

Adam Parks (42:03)
Absolutely, I feel like we might have to do this on stage at some point this year, so I’ll follow up with you on that next week. But for those of you that are watching, if you have additional questions you’d like to ask Heath and myself, leave them right here on LinkedIn and YouTube. We’re here to answer questions and we want to keep this discussion going beyond just the two of us. I think there’s a lot of people experimenting with artificial intelligence across the industry or whatever business you’re in. Come join the discussion, come be part of it.

Heath Morgan (42:08)
Let’s do that. Yeah.

Adam Parks (42:29)
If you have additional topics you’d like to see us discuss, you can leave those in the comments below as well. And it sounds like I’m gonna get Heath back at least one more time to help me continue to create great content for a great industry. But until next time, Heath, I really do appreciate your insights. This has been an awesome conversation, and nowhere near what I had planned for the original episode. This was way better.

Heath Morgan (42:49)
It’s perfect.

Thanks a lot. Appreciate it.

Adam Parks (42:54)
Absolutely,

and for those of you watching, we’ll see you all again soon. Thank you, everybody.

Balancing AI-Powered Convenience and Personal Privacy

Can businesses leverage AI without sacrificing personal privacy? As AI adoption accelerates, organizations must navigate the fine line between automation and consumer data protection. With the increasing integration of artificial intelligence into daily business operations, the balance between technological efficiency and ethical responsibility has become a pressing concern. Businesses face critical questions: How can they maximize AI’s benefits while safeguarding sensitive personal data? What measures should they take to ensure compliance with evolving regulations? And how will AI’s role in decision-making reshape industry standards in the years ahead?

In this episode of the AI Hub Podcast, host Adam Parks sits down with Heath Morgan, attorney at Martin Golden Lyons Watts Morgan, to explore the ethical, legal, and business implications of AI-driven convenience. Morgan, a recognized thought leader in AI and compliance, shares his expertise on the potential risks, regulatory challenges, and best practices for integrating AI in a way that enhances business efficiency without compromising consumer privacy.

From AI surveillance concerns to corporate AI ethics, this discussion offers actionable insights for business leaders, AI professionals, and compliance experts looking to implement AI responsibly while maintaining consumer trust. The conversation underscores the importance of transparency, ethical AI governance, and ongoing compliance monitoring to ensure that organizations can harness the power of AI while minimizing its risks.

Key Insights: AI, Privacy, and the Compliance Challenge

1. AI Convenience Comes with Data Privacy Risks

AI-powered tools offer efficiency and automation, but they also collect, process, and store massive amounts of personal data. This raises critical questions:

  • How much personal data should businesses collect?
  • Are AI models inadvertently exposing sensitive information?
  • How can organizations protect consumer data while leveraging AI for efficiency?

Expert Insight: “Everything is recording and collecting data… It’s understanding what the technology we’re using, what data it’s collecting, and can that be shared in an open market or do we need to have restrictions on it?” – Heath Morgan.

Actionable Tip: Businesses should implement strict data governance policies and ensure transparency in AI-driven decision-making.

2. Corporate AI Ethics: Are Businesses Prioritizing Compliance?

Companies integrating AI must balance innovation with responsibility. The absence of clear regulations means organizations need internal AI governance frameworks to ensure ethical AI use.

Key Discussion Points:
  • The lack of standardized AI regulations creates uncertainty.
  • AI-generated decisions must be explainable to meet compliance standards.
  • Consumer AI use is increasing, making ethical business AI adoption even more critical.

“It’s not a matter of if you’re going to adopt this technology, it’s a matter of when and accepting that aspect of it.” – Heath Morgan.

Actionable Tip: Implement AI governance committees that include legal, compliance, and data security experts to evaluate and monitor AI risks.

3. AI Hallucinations: When AI Gets It Wrong

AI models sometimes generate misleading or incorrect information—a phenomenon known as AI hallucination. This poses serious risks for businesses, particularly in finance, compliance, and legal industries.

What Causes AI Hallucinations?
  • Poorly trained AI models that misinterpret data
  • Over-reliance on AI automation without human oversight
  • Lack of transparency in AI-generated responses

Expert Insight: “If we can’t keep the chatbot from hallucinating, we probably shouldn’t be using that bot.” – Adam Parks.

Actionable Tip: Businesses must establish AI validation processes to cross-check AI-generated information before relying on it for decision-making.
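
As one concrete illustration of such a validation process, the sketch below makes a second pass that asks the model to flag every figure it cannot source, echoing Adam’s habit of ending prompts with “go back, revalidate, show me all sources for all numbers.” The model name and prompt wording are assumptions, and a human reviewer still checks the flagged items against original data before anything is acted on.

```python
# A minimal sketch of one validation pattern discussed in the episode:
# after getting a draft answer, make a second pass that asks the model to
# flag every numeric or factual claim it cannot source. The model name and
# prompts are illustrative assumptions, not a complete validation framework.
from openai import OpenAI

client = OpenAI()
MODEL = "gpt-4o"  # assumption: whichever model your organization has approved

def draft_then_verify(question: str) -> dict:
    """Return a draft answer plus a second-pass review of its factual claims."""
    draft = client.chat.completions.create(
        model=MODEL,
        messages=[{"role": "user", "content": question}],
    ).choices[0].message.content

    review = client.chat.completions.create(
        model=MODEL,
        messages=[{
            "role": "user",
            "content": (
                "Review the answer below. List every number or factual claim, "
                "say whether you can cite a source for it, and flag anything "
                "you cannot verify as UNVERIFIED.\n\n" + draft
            ),
        }],
    ).choices[0].message.content

    # A human still reviews both outputs before the answer informs any decision.
    return {"draft": draft, "review": review}
```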

Key Podcast Timestamps – Must-Listen Insights

Skip to the parts that interest you most!

0:00 – Introduction: Balancing AI-powered convenience and personal privacy
5:45 – The Hidden Privacy Risks in AI Adoption
9:30 – Can You Trust AI-Generated Data? Understanding AI Hallucinations
14:15 – The Future of AI Regulations & Compliance Risks
18:50 – Corporate AI Ethics: How Businesses Can Protect Consumer Data
28:35 – How Consumers Are Using AI to Negotiate Financial Decisions
33:00 – Final Thoughts: The Future of AI & Privacy Compliance

Frequently Asked Questions About AI & Privacy

What are AI hallucinations, and why do they matter?

AI hallucinations occur when AI models generate false or misleading information that appears factual but lacks accuracy. These hallucinations are especially problematic in compliance-heavy industries such as finance, healthcare, and law, where misinformation can lead to costly legal and operational consequences. Businesses relying on AI-generated insights must implement validation processes to ensure accuracy before acting on the information.

How does AI impact corporate compliance and risk management?

AI adoption has significant implications for corporate compliance and risk management. While AI-driven automation can enhance efficiency, it must align with existing data privacy regulations, ethical AI guidelines, and emerging government policies. Companies that lack proper AI governance may face regulatory penalties and reputational damage. Establishing clear compliance frameworks helps mitigate risks and ensures ethical AI use.

What steps should businesses take to implement AI responsibly?

For businesses looking to integrate AI, the key is transparency, data protection, and ongoing oversight. AI should function as a supportive tool rather than a fully autonomous system, ensuring that human oversight remains a core component of decision-making. Organizations should adopt AI ethics committees, conduct regular audits, and provide employee training on AI-related risks.

What regulatory changes are expected in AI governance?

AI regulations are evolving rapidly, with governments worldwide moving toward stricter compliance standards. New policies will likely focus on transparency, fairness, and accountability in AI-driven processes. Companies that proactively monitor and adjust to regulatory developments will maintain a competitive edge while avoiding legal pitfalls.

Resources & Further Reading

Full Episode & Show Notes: ReceivablesInfo.com AI Hub
Guest’s Firm: Martin Golden Lyons Watts Morgan
Connect with Heath Morgan on LinkedIn: Heath Morgan

Final Thoughts & How You Can Join the Conversation

AI-powered convenience is reshaping business operations, but it also raises critical privacy concerns. The key question remains: Can businesses embrace AI without compromising ethical standards and compliance?

We want to hear from you!
What concerns do you have about AI and data privacy?
How should businesses regulate AI usage?
Drop your thoughts in the comments below!

Subscribe to the AI Hub Podcast for more expert discussions on AI, ethics, and compliance!

About Company

Martin Golden Lyons

Martin Golden Lyons Watts Morgan PLLC is a law firm that partners with clients in the financial services and healthcare industries, bringing experienced, forward-thinking expertise to compliance and litigation solutions. The firm is headquartered in Dallas, with offices in Chicago, Denver, Houston, St. Louis, and Tampa.

About The Guest

Heath Morgan
