Adam Parks (00:06)
Hello everybody, Adam Parks here with another episode of the AI Hub. This is one of my favorite series to record because artificial intelligence is at the heart of a lot of what we do as businesses today. And I thought today's episode would be extra interesting because I've got Tom York joining me from CastleWise to talk about the intersection of artificial intelligence and insurance. We've seen AI adoption explode: the share of companies saying they'd never touch it has fallen from 60% to 24%, and now down to just 7%. We've seen this blossoming of adoption of the technology, but insurance does not always keep up at the same pace. And as I was having another conversation with Tom at the RMAI annual conference, it was interesting to hear his take as someone who spends so much time in both collections and insuring collection organizations. So Tom, thank you so much for coming on today. I really appreciate you coming and sharing your insights.
Tom York (01:15)
Yeah, happy to be here. AI is ever developing. Insurance policies aren’t necessarily working to keep up with it. And you’re going to see things rapidly changing in insurance as claims come about and case law presents itself. So for now, it’s kind of nascent.
Adam Parks (01:32)
Well, Tom, for anyone who has not been as lucky as me to get to know you through the years, could you give everyone a little background on yourself and tell them how you got to the seat that you’re in today?
Tom York (01:42)
Sure, so I spent about 10 years in the collection industry running collection strategy and serving as co-CFO for a company called ERC, or Enhanced Recovery Company, ultimately Enhanced Resource Centers, which was eventually sold to TrueAccord. Prior to that, I worked carrier-side in insurance developing products and coverages. I started CastleWise Insurance Group in 2019, and we focus on providing property and casualty insurance to the ARM industry and the tech providers that service it.
Adam Parks (02:10)
And as someone who is spending so much time on the insurance side of the world, I thought this would be a great conversation for us because my understanding is that the insurance coverages may not be covering as much as we expect when it comes to artificial intelligence. So from a high level, could you help me understand the types of insurance that a debt collection organization would normally have and kind of where that gap is today?
Tom York (02:37)
Sure, so the majority of folks that I speak to are interested in errors and omissions, also called professional liability or E&O, as well as cyber. An agency is also gonna have a general liability policy, work comp depending on number of employees, as well as maybe a crime, fiduciary, directors and officers, or employment practices liability policy. But when it comes to AI insurance, I think a lot of people's immediate reaction is to think about a cyber policy. When you think about what the policies are actually supposed to provide in terms of coverage, cyber is generally going to be for data breach and, to a lesser extent, maybe social engineering, cyber crime, funds transfer fraud. The E&O policy is really going to cover what you're doing for professional services. And for the purposes of AI, it's really going to be covered under your E&O policy and not so much your cyber. I think there are some cyber policies that will say, we provide explicit coverage for AI. But what they really mean is AI actors acting against you, not so much your implementation of AI as an agent trying to collect money.
Adam Parks (03:50)
So not necessarily protection against the tools, but protection against other types of artificial intelligence tools being used against you in, say, a social engineering attack or a cyber attack.
Tom York (04:01)
Yeah, that's exactly right. And so on the E&O side, depending on the definition of professional services found in your policy, it's generally going to say something like, in the act of collecting debt for a third party or your own debt, or something like that. And so a claim or an issue arising from the AI that you implement is generally going to be, well, we'll say thought to be covered by an E&O policy. But as you'll soon find out, the carriers just aren't addressing it, really. They're just kind of leaving it open-ended and vague. There's no explicit exclusion for AI, but there's also no explicit coverage for AI in traditional E&O policies.
Adam Parks (04:40)
So because it’s not excluded or included, we’re waiting on, I’m assuming, case law to establish the guardrails.
Tom York (04:47)
Yeah, case law. And so right before we met, I reached out to one of the carriers that we use to insure a lot of agencies, and I'll pull it up and tell you exactly what they said.
I've steered clear of modifying my language to affirmatively address AI as there isn't any relevant case law yet. I view AI as another spoke in the wheel of technology. Ultimately, we're covering the professional services. The technology should just be a delivery method, not the service itself. So this particular carrier, at least, is indicating: yeah, we'll just treat it the same whether you're sending a text or an email or using a regular human agent to go and try and collect the money. And I'll say, that's all fine and well until the claims start rolling in. Then they may have a different opinion.
Adam Parks (05:35)
Now, have we seen policies being developed specifically to manage the risk associated with the use of artificial intelligence by your business?
Tom York (05:45)
Yeah, actually there's one group that I work with that's kind of at the forefront. It's Armilla, and they've got two policies. One would be for the collection agency, which is an AI third-party liability policy that explicitly protects you from your implementation of AI agents. And so it's a little bit more invasive than getting insured for traditional E&O or cyber. You can see in the applications that these guys actually want a third-party audit done on your AI model so that they feel comfortable with what they're insuring.
But at the end of the day, they will explicitly provide coverage for your AI models. More interestingly, for tech and SaaS providers that are selling their AI models to collection agencies or whoever, there's also a warranty product where they will go in and evaluate the model. And it can help in the sales process. Let's say I'm the tech provider and I'm trying to sell something to Adam here. I can say: Adam, we guarantee your liquidation is going to be at 5%. And as long as we get the warranty on it, it's going to cover model underperformance. So if we only come in at 4%, that insurance product will pay out the other 1% of lost liquidation in damages, thereby giving the SaaS or tech provider an insured guarantee, not just a sales guy going, yeah, yeah, it's gonna be great. There is a contract behind it that says we guarantee this model performance.
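To make the arithmetic in Tom's example concrete, here is a minimal sketch of how a liquidation-warranty payout could be computed. The function, its basis-point terms, and the $10M placement are illustrative assumptions for the sketch, not language or figures from Armilla's actual product:

```python
def warranty_payout(placed_balance, guaranteed_bps, actual_bps, limit=None):
    """Damages owed when actual liquidation falls short of the guaranteed rate.

    Rates are in basis points (500 bps = 5%). Assumes a simple linear
    shortfall payout, optionally capped at a policy limit; a real warranty
    contract will define the payout formula in its own terms.
    """
    shortfall_bps = max(guaranteed_bps - actual_bps, 0)
    payout = placed_balance * shortfall_bps / 10_000
    return min(payout, limit) if limit is not None else payout

# Tom's example: 5% guaranteed, 4% actual, on a hypothetical $10M placement
print(warranty_payout(10_000_000, 500, 400))  # 100000.0 — the "other 1%"
```

If the model outperforms the guarantee, the shortfall is zero and nothing is owed; the cap illustrates why the policy limit matters as much as the rate.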
Adam Parks (07:22)
That's really interesting in terms of how they've got that broken down. But when we're insuring our businesses against artificial intelligence risk, and this may be overly specific, but I'm going to ask it anyway: what types of, let's call them behavioral traits, are we insuring ourselves against? Are we insuring ourselves against potential hallucinations? What kind of coverage are we getting for what types of scenarios with these policies?
Tom York (07:50)
Yeah, so it would be what's generally termed model underperformance, which could include hallucinations. I've got some specific thoughts on what might impact a collection agency: maybe disclosure of the existence of debt to a third party, or the AI model stating an amount that's wrong in terms of fees or interest, or maybe the AI model threatening escalation, saying, hey, we may garnish or we may sue you, when you're not doing legal collections. Things like that. And we would lump all three of those scenarios in as quote-unquote model underperformance.
Adam Parks (08:25)
Well, that's interesting. So you're kind of looking at that as a total underperformance type of issue. So these are non-breach-related items. I wonder what kind of coverage this would provide from a regulatory standpoint as we continue to go through the regulatory shifts in the debt collection industry. We're not totally sure how things are going to play out, especially since the federal government has not really taken this on and it's starting to pop up in a variety of states, which is going to cause challenges in the web of compliance that becomes required. Is there any specific language in the policies that addresses that threat?
Tom York (09:04)
No, not necessarily. I've put some thought into this particular subject when it comes to how regulators view AI-driven harm.
And what they care about is outcomes over intent. If the AI agent produced false or misleading statements, regulators tend to analyze it under the same core debt collection standards. The tech doesn't necessarily change the rule. And just because it's a black box, there's no AI exemption that regulators are going to give.
In terms of policy coverage, for your traditional E&O policy or your AI-specific liability policy, you've got coverage for regulators coming down with fines, things like that. But with a caveat that it varies policy to policy: I've seen policies where there's no coverage for regulatory fines or anything like that, but generally speaking, it's there.
Adam Parks (09:58)
Understood. I think that makes sense. You know, one of the things that you talked about was the confusion in the marketplace between what's covered under cyber insurance policies and what's covered under, let's say, a specific artificial intelligence policy, or why cyber might not cover some of the AI issues we've talked about: hallucinations, model underperformance, et cetera. Where do you see that line being drawn? Or is there a clear line in your mind between those two policies and how we should view them as an industry?
Tom York (10:31)
So, yeah, in my mind, it's fairly clear. I think that with your implementation of AI gone wrong, you're generally gonna look to your E&O policy. With the AI-specific E&O policy, they kind of have a little bundle package that you can do, depending on whether you're an agency or a tech provider.
They also provide coverage for your traditional cyber. But if you're a SaaS or tech provider, they've got your tech E&O, which is oftentimes bundled with your cyber. And just to define what that is for the agency folks: it's just like your E&O, but if you're a tech provider, your product is technology, not collections. It's the coverage that goes along with cyber to protect you in the event that your product goes wrong, like a bad implementation, or maybe services go down, or maybe you pushed out an update that ruined things for your clients. And so those policies can work together. For a tech provider, it would be cyber, the AI warranty, and the tech E&O; the agency would have the cyber and then the AI liability. And depending on the facts of the actual claim, it may pull coverage from both policies at the same time. Maybe it's leakage of data from the AI, which might have some facets of breach in there, or maybe there's a portion of it that's AI underperformance. But yeah, no clear delineation.
Adam Parks (12:04)
Interesting. So there's no clear line between them. It sounds like it's almost a question of: has there been a penetration or a data leakage, versus something like model underperformance, where it's hallucinating and causing these other issues? And I don't know that that's necessarily where the line is. But my next question: there are two worlds, the buy-it camp and the build-it camp, when it comes to artificial intelligence in this space. Either it's people that are buying technology from third-party vendors, or it's organizations that are seriously considering building their own internal technology. How different are the coverages or policies available for protection against those two different types of risks?
Tom York (12:48)
Yeah, well, I can tell you from firsthand experience on buy it versus build it. This was before AI was generally commonplace: we tried building it, not CastleWise but the agency that I worked for, and the technology just wasn't there. Setting aside whether it's better or worse from a risk perspective, it hammered into my mind that collection agencies aren't tech companies. This is just my advice, take it for what it's worth: do what you do best, and that's collect bills. Don't necessarily try to build the next tech platform or be the next OpenAI or ChatGPT. That experience alone says: go buy it. But from a risk perspective, quite frankly, depending on the amount of resources you're willing to put into the product, the buy-it camp probably wins, because whoever you're buying it from probably has a more robust AI team and a more robust development team, and they're getting more reps and testing from all the clients they're working with. Just from that perspective, they'll probably put out a product that is less risky. And you also have to think about it as a liability transfer. It's like, well, I didn't build it, you built it. You've got a tech E&O policy. You've got a cyber policy. I need to be added as an additional insured. You need to come in in the event of a claim if your product fails me.
Adam Parks (14:20)
It sounds like a strategic decision, right? And insurance plays into the strategic decision between should I buy this or should I build it. And look, I did a whole webinar with a couple of AI-driven folks where we dug deeply into that particular discussion. I like your approach, though. If you're gonna use a third party, that's also an opportunity to get within their insurance coverage and get some of that liability transferred off of you, because they control the underlying technology, the underlying model, et cetera. These are all "it depends" answers, right? There are so many different variables in this discussion, so we'll try to keep it somewhat general here. But no real major difference in the way that you would view a policy if I was building this technology versus buying it from a third party?
Tom York (15:10)
In terms of my own coverage, no. Really, the decision comes down to: do you want an AI-explicit coverage policy? And I think one of the added benefits of this particular policy that I'm referencing is the third-party model validation, because the insurance carrier doesn't want to get involved covering an AI model it hasn't evaluated.
So it at least gives you someone else that's willing to say, yeah, we'll put skin in the game here, we'll insure this. And that would make me feel better. Maybe I don't even take the coverage; maybe I just pay for the model validation, just to have someone say they're willing to cover it. But yeah, if I were building it myself, I'd hesitate. There's a lot of risk and headache associated with that.
Adam Parks (15:57)
Yeah, risk and there’s also a massive expense with maintaining that model, keeping it updated, dealing with the regulatory and compliance shifts that are happening into the future and being able to unwind some of things you’ve built so that you can adapt it and wind it back up again. So there’s a variety of risks associated with the build model.
Adam Parks (16:17)
So I was kind of curious: is it going to cost me exponentially more to protect myself if I'm building it internally versus leveraging third-party technology that's being covered in other policies?
Tom York (16:29)
Yeah, and just to bring up a different claim scenario that isn't collection-specific, but it is something you alluded to with regulatory pressure: there is some case law related to the EEOC, the Equal Employment Opportunity Commission, versus a company called iTutorGroup, which was a screening agent for jobs. Basically what happened there is they had a model that, depending on what your gender was, was pushing out applicants it thought were too old. It's like: here's Tom York, fine. Here's Janie, oh, she's 63, no, I don't think so. There were 200 applicants where they went back, looked through it, and said these people were otherwise qualified, except the model decided they were too old. And so there was, I think I've got it here.
Yeah, a $365,000 settlement to those particular applicants. But you could easily run into a scenario where a regulator is claiming that there's a disparate impact, which basically means that you're treating protected classes differently. So maybe this protected group is getting a certain settlement offer that this other group isn't.
Proving that that's not happening, or making sure that it doesn't happen, can be a big deal. That's just not your traditional, oh, third-party disclosure, or the AI agent saying we're going to send the police or something. That's probably what people would ordinarily think about, but it gets a little bit more complex than that.
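A common screen for the disparate impact Tom describes is the EEOC's "four-fifths rule": a group's selection (or offer) rate shouldn't fall below 80% of the most-favored group's rate. A minimal sketch of that check, with hypothetical applicant counts rather than figures from the iTutorGroup case:

```python
def adverse_impact_ratio(selected_a, total_a, selected_b, total_b):
    """Ratio of group A's selection rate to group B's (the favored group)."""
    rate_a = selected_a / total_a
    rate_b = selected_b / total_b
    return rate_a / rate_b

# Hypothetical: 30 of 100 older applicants get a settlement offer
# versus 50 of 100 younger applicants
ratio = adverse_impact_ratio(30, 100, 50, 100)
print(ratio)         # 0.6
print(ratio >= 0.8)  # False — fails the four-fifths screen
```

The same arithmetic applies to settlement offers, payment plans, or any other outcome an AI model distributes across consumers; a failing ratio isn't proof of discrimination, but it is the kind of statistic a regulator would start from.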
Adam Parks (18:03)
Well, as we think about going through the process of being underwritten and getting our AI insurance, is there anything we should be preparing as organizations, governance models, documentation, or anything else we should be considering that would improve the process as we're obtaining these new types of insurance?
Tom York (18:26)
Well, I think we have to walk before we run here. And I say that only because with most of the agencies that I talk to, you ask them and they're like, yeah, yeah, we're gonna implement AI. And it's like, okay, well, how are you insuring that? And they're like, that's not even something I've ever considered. And so the first step, as people are implementing this stuff, is to get them to ask the question internally or...
Adam Parks (18:42)
Fair point.
Tom York (18:49)
to themselves: are we protected? What are we doing in the event of a claim? They're just focused on maybe the resource and expense wins and improved liquidation or something. But really, it's acknowledging that there is risk that hasn't been there in the past. So that's the first thing.
But in terms of policies and procedures, sure, yeah, you can build out governance policies saying we're not gonna have disparate impact. And it's like, okay, you can say that. It's gonna look good on its face, but...
Adam Parks (19:25)
Fair point, fair point. So when we think about these tech-enabled E&O policies, it sounds like the AI coverage is even beyond that type of technology. But when we're looking at these insurance policies, we're insuring against the outputs. At least in my organization, everybody's like, we use a lot of artificial intelligence, but everybody's responsible for the output of any AI model or tool that they're using. Is that the same level of expectation when you're looking at it from an insurance standpoint, or are they insuring against the output of the model itself?
Tom York (20:03)
So whether it’s insurance or even the folks that are suing you and the regulators, they don’t care about intent. They care about what actually happened. And so yes, that’s exactly right. You’re more concerned about the output than what you intended to accomplish in deploying the model.
Adam Parks (20:22)
And as we think about the breaches, at least from what you’ve seen in your research, what kind of instances have triggered claims and coverage as it relates to the AI insurance policies?
Tom York (20:38)
I mean, honestly, there isn't a lot of case law or claims history there yet. I guess the best that I could find was that TrueAccord was sued in the state of Colorado over rate caps related to tribal loans. And it wasn't necessarily that AI caused it; it was more that implementing AI can drastically increase the number of consumers that are impacted, just because it's so much more efficient.
And so if the model's wrong, or there are issues related to the model, you're going to hit a much larger group of folks with probably the same issue over and over again. It expands the potential liability so much more than having humans do it. Because if Johnny says something, it doesn't mean Melissa is going to say the same thing, whereas it's the same AI model interacting with a lot of different folks.
Adam Parks (21:34)
You know, it's interesting, because we often talk about how these AI tools provide us with scalability, and there are so many instances where, if you feed bad data into an AI model, you're going to exacerbate your problems exponentially. If you feed it good data, you can improve your situation dramatically. It's a magnifier, however we look at it. And I think that's an interesting point from an insurance standpoint, because if it's gonna go wrong, it's gonna go wrong with great style and fashion. It's going to be at scale, because we're trying to have these conversations or communications, whether written or verbal, in a scaled way. That volume, I think, is really important. Now, as we think about preparing ourselves as AI-driven organizations, what should our expectations be for the cost of insuring AI versus not insuring it? And I realize that asking that of an insurance broker is not quite the right question, because it involves so many variables, but I'm just looking for a thumbs up or down: is it going to be way more expensive, or is it not that crazy? Because you've got at least an indication of what the average
Tom York (22:45)
Yeah.
Adam Parks (22:59)
collection agency is paying for these various insurance policies. I mean, how much more are we talking in terms of being able to cover that type of technology?
Tom York (23:11)
Sure, so an AI liability policy in and of itself is probably going to be between 2 to 4% of the policy limit. So a million-dollar limit is going to cost you 20 to 40 thousand dollars. It really depends: traditional cyber policies are going to be a function of how many NPI records you have and what your revenue is. So startup agencies may buy a million-dollar cyber policy for a few grand, whereas that same million-dollar policy might cost 10, 20, 30 grand per million for a much larger agency. But the pricing for the AI liability or the AI warranty is more a percentage of the policy limit, rather than NPI or revenue. 2 to 4% of the policy limit.
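Tom's rate-on-limit pricing is simple to sanity-check. A quick sketch using the 2 to 4% range he quotes; the helper function is illustrative, not a rating formula from any carrier:

```python
def ai_liability_premium(policy_limit, pct_low=2, pct_high=4):
    """Premium range when an AI liability policy is priced as a % of the limit."""
    return policy_limit * pct_low / 100, policy_limit * pct_high / 100

# The $1M example from the conversation
low, high = ai_liability_premium(1_000_000)
print(f"${low:,.0f} to ${high:,.0f}")  # $20,000 to $40,000
```

Note the contrast with cyber, where (per Tom) the rating inputs are NPI record counts and revenue rather than a flat percentage of the limit.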
Adam Parks (24:08)
That's exactly what I was looking for. I was just looking for an indicator, right? It's going to be more expensive, but it's also going to have some dependent factors. There's no clean answer to that question, but I was curious how it was being viewed, and as a percentage of policy limits, that kind of makes sense, at least until there's more information and case law for them to refine their models.
Tom York (24:32)
Yeah, exactly right. It ain't cheap, that's for sure. And here's my suspicion: as we see more claims as a result of AI deployments, it's not necessarily going to come from the compliance side at the agency level. It's going to be on the creditor side, in their MSAs and their insurance requirements. They're going to say, yeah, you have to have an E&O policy. And I've seen some where they're like,
Tom York (24:59)
yeah, you have to have a cyber policy, but it also has to have X amount of coverage for cyber crime. And the next thing you're gonna see is: yeah, you've got to have an E&O policy, but it's got to affirmatively cover your AI. So I think that's probably where we'll see people getting more AI liability insurance. No one's necessarily going out and going, man, how can I spend more money on insurance today? It's always your clients going, hey, you gotta go spend more money on insurance.
Adam Parks (25:22)
Agreed, it's a function of our AI planning, because for organizations deploying this technology, the cost is not just buying the tools, running the tools on a per-usage basis, or even the power usage of these individual models. If we're going to look at the AI investment holistically, I think we have to include the increase in our insurance costs specifically related to creating coverage for those types of situations.
Tom York (25:57)
Man, everybody wants a Ferrari, but they don’t expect to pay $5,000 for an oil change.
Adam Parks (26:02)
True statement, but I think that’s exactly it. Looking at these costs, especially as we go back to the buy it or build it discussion, if you’re going to build it internally, you have to have your insurance baked into that cost structure in terms of what is it really going to cost me to deploy this type of model and technology.
Tom York (26:21)
Yeah, it's very true. In terms of our discussion before about how AI can exacerbate problems and increase the scope just because of how efficient it is: you open yourself up to a greater propensity and likelihood of class action, because if the AI model is doing the same thing every time and it's doing something wrong, then there are a lot of impacted folks out there. So for sure, you do not want to go out of business overnight because of your AI model. It's traditionally been recognized that if you have a large enough breach, it's game over, buddy. There's not enough cyber insurance in the world that you could afford to insure against a full breach of a million or 10 million records. I guess the next best comparison would be class action for letters. You're putting the same stuff on letters, and if you get one of your letter backers wrong with regard to a certain state, then yeah, that opens you up to class action. But AI is, you know, worse than that.
Adam Parks (27:24)
So I know we’ve covered a lot of information here today, Tom, and I really do appreciate all the insights. Is there anything that we didn’t talk about that you think our audience should be aware of as it relates to AI insurance?
Tom York (27:35)
No, I think we got pretty good coverage on the topic. The only thing that I would hope to impart or reinforce is thinking about this as you're deploying these models. If you want to stay in business and be competitive against other agencies, you've got to do something, right? You can't just stick your head in the sand and avoid it. That's not an option.
But be smart about it and really consider what you’re opening yourself up to liability-wise and more importantly what you can do to mitigate that as you’re doing the deployment. Is that risk transfer? Are you trying to get whoever you’re purchasing the model from or whoever’s doing the implementation to make sure that you’re whole at the end of the day? Or are you buying insurance to cover it yourself?
Adam Parks (28:18)
I think you bring up a lot of incredible points here, and I'm gonna really push this episode to make sure as many people as possible watch it, because it's something that I hadn't even considered until we were recording a Receivables Podcast together and you brought up the point: hey, did you know your E&O doesn't even cover that? You probably need to look at this. And that prompted this entire episode.
Tom York (28:31)
Yeah, well, I'm not saying it doesn't cover it. I'm just saying it doesn't explicitly cover it. But you don't want to be the cautionary tale.
Adam Parks (28:46)
I definitely do not want to be the cautionary tale. But I really do appreciate you coming on and sharing all of your insights today. From every conversation we have, I learn so much. So I really do appreciate you.
Tom York (28:58)
Thank you for having me.
Adam Parks (28:59)
And thank you everybody for watching. If you have any additional questions you'd like to ask Tom or myself, you can leave them in the comments below on LinkedIn or YouTube, or you can reach out to Tom directly. Because insurance is such a specific, variable-driven item, you might want to just have that direct discussion. But you can leave comments here on LinkedIn and YouTube and we'll be responding to those. Or if you have additional topics you'd like to see us cover, leave those in the comments below as well. And I bet I can get Tom back here at least one more time to help me continue to create great content for a great industry. But until next time, Tom, thank you so much. I appreciate you.
Tom York (29:30)
Yeah, thank you.
Adam Parks (29:31)
And thank you everybody for watching. We appreciate your time and attention. We’ll see you all again soon.