[On-Demand Webinar]:
Break through contract bottlenecks with AI and without added risk — Here’s how
What You’ll Learn:
Healthcare contracts are often complex, time-consuming, and high-risk, leaving legal and compliance teams under constant pressure. In this session, we’ll show you how healthcare organizations are overcoming these challenges by adopting AI tools that accelerate contract reviews and improve accuracy, all while maintaining full compliance and safeguarding human expertise.
Key Takeaways:
- Standardize Language, Reduce Risk, and Streamline Workflows: Discover how AI can help you create consistency across all contracts, reducing errors and speeding up negotiations. By standardizing terms, you’ll minimize the potential for compliance issues and mitigate risks while streamlining your entire contract workflow.
- Strengthen Compliance with AI Integration: Learn how to embed AI thoughtfully into your contract lifecycle to ensure that all contracts meet legal and regulatory standards. You’ll understand how AI can spot potential compliance risks before they become problems, allowing you to stay ahead of regulatory changes and reduce risk.
- Use Automation to Complement Expert Legal Oversight: See how automation can accelerate routine tasks like contract creation, redlining, and approval processes — without replacing the critical oversight and decision-making power of your legal team. Automation enhances efficiency and empowers your team to focus on strategic, high-value tasks.
Join us to gain the knowledge you need to optimize your contract management process, reduce risks, and improve efficiency — all while preserving human judgment and expertise.
Welcome, everyone, to today’s webinar, “Break through contract bottlenecks with AI and without added risk — here’s how.” My name is Giles Bruce, Assistant Editor for Becker’s Healthcare. On behalf of all of us, thanks for joining us today. Before we begin, I’d like to walk us through a few quick housekeeping instructions. We’ll begin today’s webinar with a discussion, and we’ll have time at the end of the hour for a question-and-answer session. You can submit any questions you have throughout the webinar by typing them into the Q&A box you see on your screen. Today’s session is being recorded and will be available after the event; you can use the same link you used to log into today’s webinar to access that recording. And if at any time you don’t see your slides moving or have trouble with the audio, please try refreshing your browser. You can also submit any technical questions into the Q&A box. We are here to help.

We want to thank Ntracts for sponsoring today’s webinar. Ntracts is the leading contract lifecycle management solution built specifically for healthcare organizations. With 85-plus years of industry expertise, Ntracts streamlines the healthcare contracting process, reduces compliance and financial risk, and supports organizations’ strategic goals through built-in best practices, automated workflows, and user-friendly reporting tools. Ntracts is committed to serving its customers’ compliance needs by continually incorporating built-in best practices that stay ahead of the ever-changing regulatory and technological landscape. In March 2025, Ntracts expanded its suite of solutions with the acquisition of Compliatric, a platform for healthcare organizations to manage their regulatory, accreditation, and administrative compliance needs. Together, these solutions help healthcare organizations operate more efficiently while staying ahead of evolving regulatory demands. Learn more at ntracts.com.
01:58
Also, DocJuris is a leading contract negotiation platform that powers contracting for rapidly growing startups and large enterprises. With its headquarters in Houston, Texas, DocJuris operates globally and across
02:14
every major B2B vertical. With that, I’m pleased to introduce today’s moderator, Stephanie Haywood, Senior Vice President of Sales and Client Engagement at Ntracts. Stephanie, thanks for being here today. I’ll now turn the floor over to you to get us started.

Great, thank you so much, and welcome, everyone. This afternoon, we’re so thrilled that you’ve all joined us. I’m so excited to be sitting here with my friend Henal Patel, CEO of DocJuris, and Lily Ha, VP of Product at Ntracts. I’d like to take a couple of minutes to introduce each of them. Lily is the Vice President of Product at Ntracts. Lily brings extensive experience in product leadership, where he’s led the development of various fintech products, including launching embedded software solutions, incrementally improving a mature product, and conceptualizing and finding product-market fit for brand-new products that target new end users. He’s also built out operational processes and internal and external customer education resources to ensure products scale to meet ongoing demand.
03:29
At Ntracts, Lily focuses on solving urgent market needs by building technologies that empower healthcare organizations to manage contracts more efficiently so they can stay focused on delivering high-quality patient care. He holds an MBA from Harvard Business School and is passionate about building inclusive, high-performing teams that create impactful tools and strengthen legal and compliance operations in healthcare.
03:55
Henal is the founder and CEO of DocJuris, as mentioned, an enterprise AI company based in Houston, Texas, that’s really redefining contract negotiation for mid-size to large organizations. Since launching DocJuris in 2018,
04:14
he’s grown his team into the leading provider of AI-powered redlining and review workflows that accelerate deal cycles while controlling and mitigating risk. Before founding DocJuris, Henal spent nearly a decade as in-house counsel for healthcare and energy companies in Texas, where he led transactional practices and pioneered early AI applications in legal operations. Henal and Lily, thank you both for joining us. I’m really excited about this conversation. Before we get started, I’d love to spend a couple of minutes polling the audience to understand a bit more about who’s participating today, so that we can tailor
05:00
the conversation a bit. So just to start, if you could share what your role is within your organization, and we’ll let this sit for just a couple of seconds while we understand the audience. Lots of health systems; if you’re in IT or tech, or if you’re outside of healthcare, please be sure to share that as well.
05:33
Okay, well, let’s take another look. It looks like we have a lot of in-house counsel today, several compliance officers, pretty evenly split, and consultants. So I’m looking forward to this discussion; I think it’ll be incredibly impactful as we talk about how AI is being
05:50
integrated and included in your legal and compliance processes. Our second question: we’d love to know what you’re most interested to learn about. How to speed up contract review and negotiations with AI, which comes up in the conversations we have frequently; whether you want to learn to reduce risk and improve compliance using automation, which of course we’re going to touch on today; how to balance AI tools with human legal expertise, which I think is going to be a key topic, and I’m really excited to have Henal and Lily dig into that a bit; how other healthcare organizations are successfully using AI in contracting; or how to get started implementing AI into your contract lifecycle processes. So we’ll give everyone just another minute
06:42
to respond, and we’ll take a look at those responses.
06:55
Great. It looks like the majority of our participants want to learn how to speed up contract review and negotiation with AI, followed by how other healthcare organizations are successfully using AI in contracting, and then how to get started implementing it into your CLM process. So we’ll definitely be touching on all of those topics today. And finally, a discussion that we have all the time: what is your level of comfort with using AI in a professional setting? Are you very comfortable and using it every day? Somewhat comfortable, but you don’t really know your organization’s guidelines? And of course we’re hearing that throughout the industry as AI committees are being formed. Are you somewhat comfortable and occasionally use it? Or are you not comfortable at all? We’ll give everyone just another second to respond to those.
08:03
Okay, it looks like people are somewhat comfortable and occasionally use it, and about 21% are very comfortable and using it every day. So I’m excited to have this discussion and hear how AI can be incorporated into your day-to-day practices. Thank you all for responding to the poll questions, and let’s jump into our discussion panel. The contracting process in healthcare is notoriously complex. Can we start by having each of you share a couple-minute overview of some of the biggest pain points you’re currently seeing in the industry when it comes to contract bottlenecks and compliance challenges? Lily, we’ll start with you, if you don’t mind.

Sure. As we’re talking to the market and our Ntracts customers, we keep hearing that people are being asked to do more with less, right? You’re reviewing 30% more contracts, but your team is not growing by 30%. I think what’s really struck me is where the complexity and the amount of work and the volume are both increasing. The second area that I think separates the healthcare industry from any other industry is how much M&A activity there is. So aside from everything being complex on its own, you’re also trying to combine compliance and legal efforts between two different organizations, and those are really unique challenges to the healthcare space.
09:36
Thank you, Lily. Henal?
09:39
Yeah, thanks, Stephanie and Lily. I think you hit on something critical here around scale. I’d add that in healthcare, we’re seeing not just more regulations, but a real explosion in volume and stakeholder involvement. So, for example, you have privacy and data-use rules layered on top of anti-kickback rules and state-level patient access mandates.
10:00
All of which means legal teams are often swamped with interpreting requirements, let alone embedding them into contracts, right?
10:08
And unfortunately, I think you’d be surprised at the amount of low-value work that’s happening around this complexity. For example, on average a contracts professional spends about two hours a day copying, pasting, and searching in Word, and whenever I tell our clients that statistic, I always get nodding heads: yeah, I do that every day. And at the same time, to Lily’s point, timelines keep shrinking. Folks expect turnaround in days, if not hours. As we saw in the poll, over 54% are interested in learning how AI can speed up the review process. But the challenge is that the playbooks and frameworks to guide those reviews haven’t kept pace, and that mismatch creates a perfect storm. Contracts bounce between clinical, finance, IT security, QA, and back to legal, often via email chains and clunky Word documents, and by the time you surface a key obligation, let’s say a new breach notification clause, you’ve already lost weeks and accrued risk. So in a nutshell, the big bottlenecks we’re seeing are regulatory complexity without corresponding process maturity; pair that with over-reliance on manual redlining tools, and it’s sort of a perfect storm for inefficiency.
11:20
Well said, Henal and Lily. So, you know, we’re continuing to hear in the market and from clients that, because of the challenges and some of the statistics you’ve talked about, they’re really looking to implement AI solutions that can help healthcare organizations do more with less and help mitigate risk, making sure we’re not getting far down the contracting process before identifying an issue that we probably should have recognized earlier.
11:54
And we also want to enable our teams to help mitigate the risk of non-compliance. As healthcare organizations, legal professionals, and compliance officers look to move to AI, what are the
12:12
things they should be incorporating or thinking about when evaluating AI features, specifically when it comes to contract lifecycle management, and not just the technology, but those processes overall? Lily, perhaps we’ll start with you.

Yeah, I think you really want to think about what your goal is. Are you trying to raise efficiency? Are you trying to mitigate risk? Are you trying to do all those things at once? Then really guide how you’re evaluating solutions based on those goals, right? Your goal isn’t just to get AI into your organization; your goal is to actually drive toward an outcome. So you want to partner with people who have a strong perspective on how they might incorporate that AI to get you to the user outcome you’re looking for. I think Ntracts feels strongly about how we develop AI features, and DocJuris does too, and that’s why we’ve been such good partners. But one: you really want AI to show its work. Think back to grade school, when you’d work through a math problem, get to an answer, and your teacher would say, show me your work, because then you can follow any of the logic leaps, and as a critical thinker you can say, oh, I don’t know if I believe the AI in this particular situation. So one is, show your work. Two, I think you really want to make sure that the AI suggestions, the AI itself, are really integrated with the rest of the product. That makes it more user-friendly; that makes it
13:40
easier to incorporate, and not something that you have to think about doing on the side.
13:46
And then third, you really want to focus on the healthcare context. We’re all healthcare professionals here (well, some of us are actually operating healthcare organizations), but we’re always trying to think about the healthcare context, because if your goal is to mitigate risk, you want to apply these tools to the risky areas of your organization, and you want somebody who understands that building the software. You need to treat a vendor that has access to PHI a little differently than a vendor that doesn’t, right? They have different risk profiles.
14:21
Absolutely. And you touched on a point there: not only do you need to make sure you’re taking these considerations into account, but you also talked about how we partner. We chose to partner with DocJuris because they have a lot of the same thoughts and the same approach. So, Henal, I’m hoping you could talk to us a little bit about DocJuris, and how your approach and your technology differ from other solutions people may encounter.

Yeah, it’s funny, I’m probably going to end up echoing a lot of the things that Lily said; I couldn’t
15:00
agree more, so I may sound repetitive here. We really lean into specialization rather than broad conversational AI. To get a little more specific about transparency, domain expertise, and composability, which I think are the main themes of what Lily just said: we built our platform around the contract review process, which means we’re laser-focused on that. If we double-click on transparency, for example, every suggestion that DocJuris makes comes with a clear “why” in a feature-rich editing experience. You can click into the source clauses from your playbook, review commentary, see which precedent language was used and why, and trace back the logic. Conversely, if you look at chatbots like Copilot or ChatGPT, and I know there are some folks here who are very familiar with AI, although those tools are very exciting and interesting, they can create more risk, because it’s truly garbage in, garbage out. You’re at the whim of how that particular model was trained, which version of the model, and, quite frankly, the prompt that sits on top of the model in terms of what you want it to do. And that leads into domain expertise, which for us means our models aren’t just generic NLP tools; they’re developed with guardrails specific to a customer’s risk framework. I think you’re going to hear a lot about that for the rest of the webinar. It’s a specialized focus that flags the right obligations, not 50 generic risks.
16:29
And to touch again on what Lily said: composability. We know that legal teams often bake DocJuris into larger CLM products, like Ntracts, which is why we work really well together. So we expose our AI outputs via APIs, widgets, and checklists that slot right into your existing workflow, so there’s no context switching, and a lot of that is going to keep evolving in the future too. The end point here is that you get a guided, step-by-step review process with built-in guardrails for risk. And, setting DocJuris aside, if there’s one thing you take away today: a step-by-step review, guardrails built in for risk management, and a hands-on checklist to validate every change before it goes final. I think that’s key. When you put it all together, and this applies to any application of AI in our view, speed and velocity are different. The one thing people say about AI is, oh, it’s making me faster, I don’t have to think as much. Well, the fastest way to get a contract done is just to sign it, right? But I think the 33% of the people on this call who are lawyers would not agree to that. In that same vein, you also wouldn’t run an AI redline and just send it out without looking at it. That’s not how our product is designed. It’s all about contracting without compromise: leveraging these tools, leveraging the technology, but doing it well. I think that’s the big, important theme for today.

You both touched on several key points there, one of them being validating the AI outputs. I think it goes back to Lily’s comment about showing your work, and Henal, I know you all take that same approach when you think about how users interact with the solution and ultimately how it’s incorporated into the workflows.
I’d love to talk a little bit about some of the most common misconceptions around using AI, specifically for contracting in highly regulated industries like healthcare. We’ve also got several compliance officers on the call, so in addition to the lawyers, they’ll probably want to hear about how we can make sure we’re addressing those misconceptions and continuing to mitigate risk. Henal, I’ll start with you.
18:57
Yeah, there’s so much I can say here. There’s been a spectrum. We’ve been doing this for about seven years, and early on, the misconception was that people didn’t realize AI could predict the next word; there was a misconception around how it works. Now it’s sort of swinging back the other way, where
19:16
I think people treat AI, in some cases, like a legal oracle: hey, if you feed it a contract, it’ll spit back the perfect clause, or, oh, it’ll just know if I give it some information. But LLMs are just pattern predictors. Before that, it was sort of classic
19:32
tagging of data to create prediction models based on the tagged information. So in a nutshell, these AI models don’t know the law. You really have to treat every suggestion as a first draft, with built-in audit trails, human review steps, and clear checklists, so you can validate and override the things that look off given the overall context of a deal. That’s a big misconception. Second, there’s this idea that accuracy is all about model size
20:00
or endless fine-tuning: I can just fine-tune the model and it’ll be fine. But you can train on every healthcare policy under the sun and the returns quickly plateau. In practice, you get way more mileage from
20:14
more thoughtful prompt engineering, retrieval-augmented generation against your own clause library, and a guided UI that steers you through the risk hotspots; again, garbage in, garbage out, and the software will be the biggest lift at the end of the day. Third, there are some legal tech companies that claim they have their own model, right? The reality is that most offerings stand on foundational models like GPT, Gemini, or Anthropic’s Claude, and there’s a bit of a race to the bottom happening right now; the competitive edge really lives in the software and the governance you layer on top, not in forging a new base model. That’s a big misconception as well. So, bottom line: AI in healthcare contracting is a powerful copilot, but not a solo pilot. You need the right processes, transparency, and domain expertise to make it deliver on the promise you’re looking for.
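Setting aside any vendor’s implementation, the retrieval-augmented generation idea mentioned here can be sketched in a few lines. This is a toy illustration, not a real product: the clause texts, IDs, and the word-overlap retrieval are all hypothetical stand-ins for a proper embedding search, but the shape is the same — ground the prompt in your own approved clause library rather than asking a general chatbot cold.

```python
def tokenize(text):
    """Lowercase word set; a real system would use embeddings instead."""
    return set(text.lower().split())

def retrieve(clause, library, k=2):
    """Rank library clauses by simple token overlap with the incoming clause."""
    scored = sorted(
        library,
        key=lambda entry: len(tokenize(clause) & tokenize(entry["text"])),
        reverse=True,
    )
    return scored[:k]

def build_prompt(incoming_clause, library):
    """Put the closest approved precedents into the prompt so the model
    compares against *your* standards, not generic training data."""
    precedents = retrieve(incoming_clause, library)
    context = "\n".join(f"- [{p['id']}] {p['text']}" for p in precedents)
    return (
        "Compare the incoming clause to these approved precedents and "
        "flag deviations:\n"
        f"{context}\n\nIncoming clause:\n{incoming_clause}"
    )

# Hypothetical clause library entries.
library = [
    {"id": "LIM-01", "text": "Liability is capped at fees paid in the prior 12 months."},
    {"id": "BRN-02", "text": "Vendor must report any data breach within 48 hours."},
    {"id": "WTY-03", "text": "Vendor warrants services conform to the documentation."},
]

prompt = build_prompt("Vendor shall report a breach within 30 days.", library)
print(prompt)
```

The point of the sketch: the model never sees the incoming clause in isolation, so a deviation (30 days vs. the approved 48 hours) is surfaced against a known-good baseline instead of whatever the base model happens to predict.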
21:09
Yeah, I can jump in there a little bit too. Henal made some really great points. I think this is where we see that specificity leads to higher precision, and that’s why we think a healthcare-specific context really leads to a better user outcome. I actually have a funny story about this whole garbage in, garbage out, and why you need a human to double-check the work. I was helping a friend calculate the interest payments on a loan; it was part of a contractual document, and ChatGPT said, you know, you’re going to be paying this amount in interest. Well, how did you get that? Well, given April has 31 days, your interest payment will be this. And, well, April obviously does not have 31 days; it is very well documented that April has 30 days. But because an LLM is really just predicting the next word, it might have thought, April is a month, and some months have 31 days, and that’s why it brought in that wrong number. So things as concrete as how many days are in April can be misinterpreted by AI, and you really want to think about that kind of application for a clause, right? That’s where I think DocJuris has done really well: making sure you have those playbooks and those things to train the model. It doesn’t have to be a lot, it can be small amounts of data, but it’s very tailored to your situation and really puts those guardrails on the model.

Thank you, Lily. Something we always want to ensure is mitigating that compliance risk in real time. So
22:57
we know that healthcare organizations are actively working to stay ahead of evolving compliance regulations, and they’re also looking for ways to streamline their processes, incorporate AI, incorporate contract lifecycle management solutions, and so on. So how can AI support this effort and mitigate that risk in real time?
23:21
Yeah, I think,
23:24
I think a lot of times people will ask me, how can AI help me? And I think that’s kind of the wrong question; it’s a solution in search of a problem, almost. Really, the question you should be asking is, what can I standardize and automate so that I can reduce my risk and make my organization more efficient? And big picture, compliance really comes down to culture, right? What are all those business stakeholders in your organization doing when you’re not breathing down their necks about policies, procedures, and processes? What are the choices they’re making? I think it’s especially hard for the healthcare industry right now because M&A is driving so much organizational change; you’re trying to bring together multiple groups of people who may have operated differently. So really, you’re looking for tools that can support people and make sure everyone is operating consistently. They might not know what the next right step is, and they’re just guessing. If you bring in tools that have that standardization built in, and they can include AI or not, that’s really where you’re going to get the bang for your buck on compliance and risk. I like to think about leading indicators of compliance and lagging indicators. A lagging indicator might be getting everything ready for an audit. If you bring AI into that, maybe it can speed up your audit process and gathering all that information, but at the end of the day, is it really making your organization less risky? All that stuff has already happened, right? You’re just reporting on it. That’s a lagging indicator, whereas maybe you really want to focus
25:00
your efforts on those leading indicators: things like, how many compliance processes do you have? Are they best-in-class processes, modeled off best practices? Can someone who is new to the organization accidentally skip a step? What percent of your contracts are going through those processes? And how can you look at statistics more in real time, like, oh, in April we actually weren’t able to complete as many contracts as we did in March, and why? And be able to dig down deeper, right? If you have AI to help with those types of indicators, I think that’s going to lead to a much more successful compliance program.
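As a rough illustration of the leading indicators described here, a minimal sketch of how such metrics might be computed from contract records pulled out of a CLM. The field names and sample records are hypothetical; any real system would have its own schema and reporting layer.

```python
from datetime import date

# Hypothetical contract records; in practice these would come from your CLM.
contracts = [
    {"signed": date(2025, 3, 14), "used_standard_workflow": True},
    {"signed": date(2025, 3, 28), "used_standard_workflow": True},
    {"signed": date(2025, 4, 2),  "used_standard_workflow": False},
    {"signed": date(2025, 4, 19), "used_standard_workflow": True},
]

def pct_through_standard_process(records):
    """Leading indicator: share of contracts that went through the standard workflow."""
    return sum(r["used_standard_workflow"] for r in records) / len(records)

def monthly_throughput(records):
    """Contracts completed per month, for spotting dips like 'April vs. March'."""
    counts = {}
    for r in records:
        key = (r["signed"].year, r["signed"].month)
        counts[key] = counts.get(key, 0) + 1
    return counts

print(pct_through_standard_process(contracts))
print(monthly_throughput(contracts))
```

The value is in watching these numbers move before an audit, not after: a dropping workflow percentage or a sudden throughput dip is the cue to dig deeper.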
25:45
Thank you, Lily. So obviously compliance can be achieved through software and solutions not necessarily requiring AI; in other instances, we’ve seen AI incorporated into both regulatory compliance and organizational compliance and governance processes, while considering the people component, the human element, in combination with the technology solution. You’ve both touched on it before, but Henal, I’m wondering if you could add a little more on what you’re seeing, your approach, and your discussions.

Yeah, I love those thoughts, and I agree. Compliance is first and foremost about people and culture. When you compare a contract lifecycle to a patient’s care journey, you have handoffs, check-ins, discharge, sign-offs; everything needs a clear protocol and tight coordination. So AI, by itself, is never going to be a silver bullet for organizational misalignment. In fact, it’s two birds with one stone: solve the people and culture
26:51
first as you deploy
26:53
AI. That said, when you legislate against risk, meaning you codify your guardrails up front (for example, Lily’s little example: hey, just so you know, April has 30 days, right?), and when you have a living compliance playbook, AI can absolutely crush it and become your real-time sentinel. That’s really where things are going. At DocJuris, we think about step one as defining those guardrails and those inputs. I mean,
27:22
one joke that I tell a lot of our prospects, friends, and colleagues is, you know, AI is like a baby shooting lasers out of its eyes. You often need to grab that baby and point it in the right direction. So your legal and compliance teams need to map out, clause by clause, what’s acceptable under all the different rules: HIPAA, state privacy laws, BAAs, etc. That playbook then feeds a review engine, which should continuously scan incoming contracts and flag deviations and gaps, and that’s where the promised land is, right, for the 55% who are interested in contract review.
27:55
Now, of course, you still need people in the loop, and that’s why it’s important to bake in regular checkpoints to review what the AI is flagging, adjust the thresholds, and update the playbook as regulations evolve. My last point here is that we should push back on the idea that we can feed some magical AI a bunch of data on a continuous basis and somehow that leads to the promised land; I think that’s where Lily and I are agreeing. The problem is that that approach can also perpetuate mistakes or decisions made under pressure and increase risk. In the AI world, there’s a term called AI drift: when you rely on a stream of inputs and machine learning, the model can drift, because maybe those decisions were made under pressure.
28:39
So, in summary, governance is key, and legislating that governance will prevent a lot of the concerns that legal and compliance teams have with AI.
28:49
Thank you, Henal. Let’s talk about some real-world examples of how organizations and individuals are successfully using AI-powered solutions to streamline their processes. Henal, maybe we can start with you and hear a few examples.

Yeah, I can lead with some low-hanging fruit, and I’ll describe it in the context of DocJuris and how we’ve deployed it, but there are some low-tech versions here that you can deploy on your own as well. So
29:18
number one, what we see a lot is automating sub-threshold reviews. We had one regional health system that used to waive legal review on purchase orders under $10K, but underneath the surface, they were worried about hidden risks, like copyleft clauses in their imaging equipment or indemnity
29:39
mismatches in their data use agreements, both of which present risks that go beyond the purchase price, naturally, for the compliance folks in the room. By integrating AI-powered screening, they can now auto-screen those low-value contracts against those
29:57
high-risk areas. About 85%
30:00
end up getting a clean bill of health and bypass legal review,
30:04
while the remaining 15%
30:08
are routed for a closer look. So the result is that they were able to slash risk on contracts that weren’t being reviewed anyway. That’s, I think, low-hanging fruit. Another one is standardizing terms. Going back to governance, another healthcare system was spending, I think, days manually hunting for deviations in vendor MSAs, particularly around warranty clauses and liability caps. The real-world application here is using
30:35
AI like DocJuris to flag non-standard warranty carve-outs and liability limits. So they went from a four-day average negotiation cycle down to 24 hours, while embracing and empowering other
30:49
departments to look for those things on the front lines and then moderating the review, so there’s a process there. At scale, this is really valuable, because unfortunately Microsoft Word is where a lot of data goes to die. Even if you have great processes on the front end, it’s hard to roll that information up to a dashboard: where is it that we are always fighting in our agreements? So it not only leads to good front-end review, but management can look at the bigger picture and strategize around the types of risks that are most important. And the final one is just reclaiming bandwidth. We’re working with a mid-size pharma company whose attorneys were previously logging eight-hour reviews on every MSA. With our guided checklist, on both incoming redlines to their template and third-party paper, they’re able to knock out that first-pass review in about 90 minutes. We talked a little bit about velocity, but the real benefit here is that the extra capacity allowed their legal team to focus on more strategic projects versus the drudgery of staring at fine print all day. There’s a lot more that legal teams can bring to the table in terms of strategy and being a partner to the business, and I think that’s a pretty big low-key benefit that AI can bring to many organizations.
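The sub-threshold screening example above can be sketched as a simple triage rule: contracts over a review threshold always go to legal, and low-value ones are auto-screened against high-risk patterns, with only the flagged ones routed for review. The $10K threshold mirrors the anecdote, but the patterns and keyword matching below are illustrative stand-ins for an AI screen tuned to an organization’s playbook.

```python
import re

# Illustrative high-risk patterns for low-value contracts; a real deployment
# would use an AI screen tuned to the organization's risk framework.
RISK_PATTERNS = {
    "copyleft": re.compile(r"\b(GPL|copyleft)\b", re.IGNORECASE),
    "indemnity": re.compile(r"\bindemnif", re.IGNORECASE),
    "data_use": re.compile(r"\bdata\s+use\b", re.IGNORECASE),
}

def triage(contract_text, value, review_threshold=10_000):
    """Contracts at or over the threshold always go to legal; below it,
    only those matching a high-risk pattern do. Returns (route, flags)."""
    if value >= review_threshold:
        return "legal_review", []
    flags = [name for name, pat in RISK_PATTERNS.items() if pat.search(contract_text)]
    return ("legal_review", flags) if flags else ("auto_approve", [])

# A low-value PO with an indemnity clause still gets routed to legal.
print(triage("Buyer shall indemnify Seller against all claims.", 4_000))
# A routine low-value PO sails through.
print(triage("Standard purchase order for surgical gloves.", 2_500))
```

The design point is that the screen never approves anything over the threshold and never silently approves a flagged contract; it only removes clean, low-value paper from the queue.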
32:05
Thank you. Lily, if you want to add? Yeah, I think with your last example, we hear people say they want to work at the top of their licenses. They don't want to be stuck in the mud, right? They want to be thinking about, and untangling, the hard problems. And the other piece, on really speeding up contract reviews: we talked with a customer who said, we are signing contracts for band-aids and laser beams, and you know what? I don't really care about those band-aid contracts. Those we might be able to get through with Doc Juris, but the laser-beam contracts I really want to spend more of my time on. And thinking about that 30% increase in contracts: maybe the 30% that are for band-aids and gauze and that kind of thing doesn't need as much attention. So
32:58
on the post-signature side, I think where people have really coalesced is around AI-assisted contract abstraction. This is the idea that, when you're putting things into a repository, you want to pull out the important features, the important data points in the contract, so that you can find it later and act on that contract later. And I think initially, when we talk to
33:26
people who are using this feature, they think it's going to save them time, and it does, but that's not really the point. There is one instance in which it does save time: we have a customer whose organization just underwent a merger, and she has this huge folder of contracts she needs to load into our system. It's never been a high priority on her list, but now that she has this AI-assisted contract abstraction, whenever she's got 10 minutes she can just bam, bam, bam, get through her to-do list a little bit faster.
33:59
But really, where the ROI comes in is reducing risk, right? Making sure that if you're not the only one putting contracts into your contract repository, the other people spread across the entire organization are actually filling in the data fields that might kick off another action. They might not be there a year from now, but you'll still need to manage the lifecycle of that contract going forward. So things like: are you sure you didn't misread whether this vendor has access to PHI? Because if you answer that incorrectly, you could be missing a BAA, and that puts you at risk for fines and penalties, as we all know. We talked to other customers who say: you know what, I wasn't thinking one day, I fat-fingered something, and the AI caught that I might have put in the wrong auto-renewal date. That is ROI in and of itself, right? That pays for the entire AI tool.
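The two checks described here, PHI access implying a BAA and sanity-checking an auto-renewal date, can be sketched as a small validation layer that runs over AI-abstracted fields. The field names and rules are hypothetical illustrations, not an actual Ntracts schema.

```python
from datetime import date

def validate_abstraction(fields: dict) -> list[str]:
    """Cross-check abstracted contract fields against simple compliance rules."""
    warnings = []
    # If the vendor touches PHI, a Business Associate Agreement must be on file.
    if fields.get("vendor_has_phi_access") and not fields.get("baa_on_file"):
        warnings.append("PHI access without a BAA on file: HIPAA fine exposure")
    # The auto-renewal date should fall after the effective date.
    effective = fields.get("effective_date")
    renewal = fields.get("auto_renewal_date")
    if effective and renewal and renewal <= effective:
        warnings.append("Auto-renewal date precedes effective date: likely a typo")
    return warnings

record = {
    "vendor_has_phi_access": True,
    "baa_on_file": False,
    "effective_date": date(2025, 3, 1),
    "auto_renewal_date": date(2024, 3, 1),  # fat-fingered year
}
for w in validate_abstraction(record):
    print("WARN:", w)
```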
35:02
Thanks, Lily. Both you and Henal shared some really fantastic examples, and I think we could spend a lot of time talking about other real-world examples.
35:15
For those organizations that are looking ahead, maybe they haven't yet adopted AI into their CLM processes, or they're on the journey and would like to continue it: what steps would you recommend to really ensure a smooth transition? Lily, if you don't mind, we'll start with you. Yeah. So first, you want to identify the features and solutions that are going to help you catch that risk, right? I'm talking to all the compliance and legal people on the call: imagine yourself at your worst. You've gotten norovirus, you've been up all night. What areas of your day-to-day do you want extra backup on? That's really where AI is going to help you. And as you're thinking about it, a majority of those are pretty well-defined use cases, and they're going to be fairly specific to healthcare; attorney friends outside the industry might not be thinking about the same things you are, so your chance of getting precision from AI can actually increase. I think you also want to think about how user-friendly a tool is, because that will help with rollout and adoption. I know a lot of people here are very comfortable with AI, but if you want more and more people to use an AI tool, you want to make sure it's not scary or intimidating to them, because sometimes tools flash beta labels and things that make you unsure whether what you're doing is going to be permanent, or will mess up something downstream you're not aware of. Second, just really be skeptical, and know that there is some level of interpretation here. You're asking a tool to weigh in on something you might have been well trained to do: you have experience, you understand nuance, and anyone using the tool may not have had those experiences, but they're still responsible for the output. That's the culture of compliance: everyone is responsible for compliance. So that means picking tools that will show their work, their thinking, their logic, to the people using them. And you really want to ask whether it's enhancing the guardrails you already have in place, not confusing the user because it's overly complicated or not helping there.
37:46
That's fantastic. Thank you, Lily. Henal, did you have anything you'd like to add?
37:53
Yeah, I think when it comes to,
37:57
you know, one of the things we like to think about when designing this type of implementation involving AI is that, at the end of the day, it often comes down to how well you're able to
38:11
rally the people around the process. And I know that's kind of a hand-wavy thing to say, but it becomes really important to design up front and ask the hard questions around,
38:23
for example, where's the volume? One thing that is highly beneficial for us when we're thinking about implementations is that we don't have to conquer every problem up front. Let's chip away at this in an iterative fashion, and let's not bite off more than we can chew, as they say. So that would be one thing I'd add. Thank you, Henal; completely aligned there, and I know we have that same approach, as we've talked about before. I want to leave some time for Q&A, but I'd like to start looking ahead. How do you see AI evolving, particularly in CLM, and what should organizations be thinking about and preparing for within the next couple of years? Lily, we'll start with you. Yeah, I think we talked a lot about contract data abstraction already, and the value in managing the future
39:22
life cycle of a contract. A couple of other use cases have been bubbling up to the surface. We hear a lot about contract summaries, and I think where we really see success is when people use them to give an overview to an exec, somebody who's going to use that information at arm's length; maybe they're part of the signature process and just want a quick "here is what this contract is about." But as everyone on this call knows, the value is really in the details and in the nuance, so you have to decide whether or not that's really beneficial to your organization.
40:02
The second area we hear a lot about, actually, is self-service on questions about specific contracts. This is really a chatbot that can help translate clauses, not for you but for business stakeholders: helping business stakeholders answer questions about individual contracts. And what you've got to think about here is: do you want your organization to get advice or interpretation from an AI model? What actions are they going to take from those answers, and will those put you at risk? You'll have a good gauge of that based on whether you're getting pinged with really simple questions, and whether you have people in your organization who go off the cuff and take actions without talking to you first. Depending on the culture, I could see this very much helping, so that you're not getting hit with all of these interpretation questions; you can mainly focus on the more sophisticated questions and save a little bit of time that way, if that makes sense. Third, we hear a lot about clause searching. Here you're not searching for a single contract, and you're not choosing up front what you might search on later. This is: tariffs just went up, and now I want to go pull the tariff pricing out of all the contracts we have today.
41:29
This is really about pulling a big list that you can take action on across a whole body of contracts, and it's pretty ad hoc. This one, I think, is a little more on the edge of feasibility, so we're definitely excited; it's a really cool problem, and I think high-impact for an organization.
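As a rough illustration of that kind of ad-hoc clause pull (a production system would use semantic or embedding-based search rather than a keyword scan, and the contract snippets here are made up):

```python
def search_clauses(contracts: dict[str, str], keywords: list[str]) -> list[tuple[str, str]]:
    """Naive ad-hoc clause search: return (contract_id, sentence) pairs that hit."""
    hits = []
    for cid, text in contracts.items():
        for sentence in text.split("."):
            if any(k.lower() in sentence.lower() for k in keywords):
                hits.append((cid, sentence.strip()))
    return hits

# Toy repository: the goal is one actionable list across the whole body of contracts.
repo = {
    "MSA-001": "Pricing excludes import tariffs. Term is 24 months.",
    "MSA-002": "Fees are fixed for the term.",
}
for cid, clause in search_clauses(repo, ["tariff"]):
    print(cid, "->", clause)
```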
41:52
Thank you, Lily. Henal, as
41:55
the founder and CEO of Doc Juris, you certainly saw the future trends several years ago when you started the company. Talk to us about what future trends you continue to see as we move forward.
42:08
Yeah. I mean, if I go back to the baby shooting lasers out of its eyes: on a very basic level, that baby is going to grow up, right? And just to give you a really interesting data point that everyone can chew on over lunch: when GPT-3 came out, or even GPT-2, in the early stages of
42:30
these models, there was only so much context you could put into the chat, and only so much response you could get back. Tokens in, tokens out. Some of the most recent models that have come out, actually, literally, last week,
42:47
can take in over a million tokens of context, which is the equivalent of 20 novels. Whether it does that well and accurately will just be a matter of time. The freight train has already left; this is going to be the future. The models are going to get better and more accurate. So foundationally, that's what everyone should take away: these models are just going to get better, no question about it. We've seen it daily in our own research since we started the company in 2018.
43:25
But if we get a little more specific: first, I think, embedded task orchestration. Today, you can ask a bot to write you an email, to give a summary, or to tell you what the tariff risks are in these 14 contracts. Tomorrow, it'll be: now, what would you like me to do with that summary? Do you want me to draft a Word document? Spin up a spreadsheet? Assess your balance sheet and highlight rows where there's risk, add comments, update CLM fields, kick off an e-signature workflow that's moderated, all without you jumping between apps? So there's this sort of multimodal, call it
44:02
task orchestration, that will be the next use case. And a lot of that's just software, by the way. As the models get better, and as software companies like ours start to embrace them more, you're going to see more application.
44:16
So it's software around these models, and as these models get better, the software is also going to grow. Second is, I think, contextual awareness. AI will be able to pull in data from
44:28
EHRs, billing systems, real-time analytics. So
44:32
for the lawyers in the room, I still believe that today AI is not here to replace advocacy, negotiation, and context, but I think context is probably the first domino to fall in the near future; there's already early evidence of this being the next big thing in enterprise applications. So
44:48
instead of standalone clause markup and analysis, you can get insights like: hey, this pricing amendment you just drafted aligns with your top-revenue service line, or
44:58
this liability cap may expose you based on that recent merger, speaking of M&A. Things like this are going to be truly game-changing, and they're happening now with our customers: collapsing the context gap that exists because of how involved we are in different aspects of the business. Third, I think, is autonomous governance loops. Once you're able to define your compliance playbook, AI will continually monitor live contracts and flag deviations in real time, spin up tickets, send notifications. It'll be much more agentic, even suggesting updated clauses for legal to review before they go into production. And one of the more out-there things we anticipate in the next decade or two is agents negotiating against each other in a contract review without people in the room. These things are going to come; it's just a matter of time, and we have to be prepared. So today, if you're a healthcare organization, start by unifying clause libraries and templates, standardizing your appetite for risk, and mapping out the end-to-end process, so that when these smarter, task-driven, context-rich agents arrive, which we feel strongly is inevitable, you'll have the plumbing in place to execute. And people will remain an important part of this puzzle; it's just that our skill set has to evolve. The practice of law has to evolve toward this. We saw it with typewriters to WordPerfect to Microsoft Word, and this is definitely the next leap.
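A loose sketch of such a governance loop: every playbook rule runs against every live contract, and each deviation opens a ticket. The rules, the contract text, and the ticket format below are invented for illustration.

```python
def governance_pass(playbook, live_contracts, open_ticket):
    """One monitoring pass: flag every playbook deviation and open a ticket for it."""
    tickets = []
    for cid, text in live_contracts.items():
        for rule_name, rule_passes in playbook.items():
            if not rule_passes(text):
                tickets.append(open_ticket(cid, rule_name))
    return tickets

# Each rule is a predicate over the contract text (a real system would use AI here).
playbook = {
    "breach_notice_72h": lambda t: "72 hours" in t,
    "has_baa_reference": lambda t: "Business Associate" in t,
}
live = {"C-9": "Vendor will notify of breaches within 30 days."}
tickets = governance_pass(playbook, live, lambda cid, rule: f"TICKET {cid}: {rule}")
print(tickets)
```

In an agentic version, a human-in-the-loop step would sit between the flag and any suggested clause update going into production, as described above.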
46:35
Thank you, Henal. I want to ask just a couple of questions that we have. You've both talked about incorporating people into the process, you've both said AI really can't be blindly trusted and that validation is key, and both of our companies take that same methodology and approach. Could you each touch, and Lily, maybe we'll start with you, on what specific guardrails or review processes you've seen work best with healthcare legal and compliance teams to safely operationalize AI?
47:12
Yeah, I think it's really where you're already doing the same process, you just have something pre-filled by AI, right? You're reacting to something instead of having to generate that response yourself. And I don't know about you, but it's always easier for me to correct something than to come up with something from scratch. So hopefully your team is already bought into a tool, and it's really just using that same tool in a faster manner.
47:49
Thank you, Lily. Henal, a question for you. You talked about prompting AI effectively to get better results, and about the future state of AI.
48:03
Do you have examples of how legal teams can structure prompts, as they start to think about using AI, to ensure consistent, reliable outputs, particularly when reviewing contracts? Yeah, it's definitely an art. It reminds me of when Google came out, right? There were the haves and the have-nots; the haves were those who could tap into Google, and they had that magic wand. And prompt engineering is actually much more complex than a Google search. But let me first start with
48:38
some wrong assumptions. Some people assume, and this goes back to misconceptions, that, oh, I can just feed this thing a bunch of data and it's going to produce a result for me. There is a very important component of prompt engineering that everyone
48:58
in the room should learn. There's a method to prompt engineering, and there are so many articles and best practices, but one thing to think about is chain of thought. So
49:11
it goes back to the April example, right? The more of a nudge you give these AI models, the better the nudge they give you back. So
49:21
when we think about that practically, when you're building your playbooks, you should think about a playbook as an amalgamation of the criteria you're identifying in a contract. Let's say I'm looking for governing law: what are the outcomes I'm anticipating if it's found or not found, or if it's Delaware versus New York versus the state your healthcare organization is in, and then how do we respond to that? You have to think about it as an expert system. It's these expert systems embedded in the prompts that, I think, make it more consistent and reliable.
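That governing-law example can be written down as an expert-system rule, roughly like the structure below; the states and responses are hypothetical, and in practice this structure would be embedded in the model's prompt rather than evaluated directly in code.

```python
# One playbook entry: a criterion plus the anticipated outcomes (illustrative only).
GOVERNING_LAW_RULE = {
    "criterion": "governing law clause",
    "if_not_found": "Insert standard governing-law clause (home state)",
    "outcomes": {
        "Texas": "Accept: matches our organization's home state",
        "Delaware": "Accept with note: common for corporate parents",
        "New York": "Escalate: requires partner sign-off",
    },
    "default_outcome": "Redline to home state and flag for attorney review",
}

def apply_rule(rule, detected_state):
    """Map what was found in the contract to the playbook's prescribed response."""
    if detected_state is None:
        return rule["if_not_found"]
    return rule["outcomes"].get(detected_state, rule["default_outcome"])

print(apply_rule(GOVERNING_LAW_RULE, "New York"))
print(apply_rule(GOVERNING_LAW_RULE, None))
```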
49:55
The problem is when you assume that very generic, unstructured text is going to produce better results, or that more of it will; more is not always better with these models. The more direction you provide an AI model up front, the better the results you'll ultimately get. We have a client that came to us with
50:17
a mountain of historically signed agreements, and they said: can we use this to train your AI to automatically mark up third-party SaaS agreements? And we said, sure, but that playbook you have on the side, the one your legal team wrote, is actually more effective for the model to interpret and apply well, because the prompt engineering in the playbook serves as the rules of the road, and the historical documents are how the rubber meets the road. It's that one-two punch that becomes really important. So yes, prompt engineering is an important skill set, but it's also just being really specific about your needs. One simple trick you can try today in Copilot: what's your goal? What is the output you expect? What are some warnings you want to advise the AI model to think about? And what are the don'ts? If you just follow that framework, you get a good feel for how much better a result you'll get. Again, it goes back to governance: when you feed the model a governance framework, you'll end up getting better results. And try different models, because different models are good at different things, right? You can try the same prompt with different models,
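The goal / expected output / warnings / don'ts framework described here can be turned into a small prompt builder. The section headers, the example clause, and the function itself are our own illustration, not a Copilot or Doc Juris feature.

```python
def build_prompt(goal, expected_output, warnings, donts, clause):
    """Assemble a structured prompt: goal, expected output, warnings, don'ts."""
    return "\n".join([
        f"GOAL: {goal}",
        f"EXPECTED OUTPUT: {expected_output}",
        "WARNINGS:\n" + "\n".join(f"- {w}" for w in warnings),
        "DO NOT:\n" + "\n".join(f"- {d}" for d in donts),
        f"CLAUSE:\n{clause}",
    ])

prompt = build_prompt(
    goal="Assess the indemnification clause for one-sided risk",
    expected_output="A bullet list of deviations from our playbook, with severity",
    warnings=["Healthcare context: consider HIPAA breach-notification costs"],
    donts=["Do not rewrite the clause", "Do not give legal advice"],
    clause="Customer shall indemnify Vendor for all claims...",
)
print(prompt)
```

Feeding the model this kind of governance framework, instead of a wall of unstructured text, is exactly the "more direction up front" point.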
51:37
and that's why AI, I think, can feel so overwhelming. It seems like we have a lot of experts in the room, so I'm sure they've all realized that too.
51:46
Fantastic. So for those in our audience who are familiar with AI, hopefully they picked up a couple of new tips around prompting, and those who are not as familiar or comfortable certainly have something to try out this afternoon; I know I'll be testing some different models. As we wrap up, I'd love to get some final thoughts from both of you, Henal and Lily; I really appreciate your time today. Henal, any final thoughts for our audience? Yeah, I loved the discussion. Before we sign off: first, treat AI as your copilot, not a replacement. Amplify the strengths of your process, but it needs control. Second, anchor every AI initiative with a playbook, and put some work up front into deliberately defining that risk profile and having a process for updating it, because that's going to be continuously important, and it's what's necessary to achieve the outcomes you're looking for.
52:51
I would say third: measure, iterate, refine. Set clear KPIs around cycle times, and begin with the end in mind. What do I want to achieve first? What's the pain point I want to solve, and what's the user-adoption risk? Then loop that back into what you want to solve with AI. As Lily mentioned earlier, you don't want to just focus on the solution; focus first on the problem we're seeking to conquer. And finally, I'd say partner with vendors who speak healthcare fluently. AI is ultimately going to make everything more specialized, including AI companies themselves, and this is actually why Ntracts has been our most successful partner to date in terms of real results for our customers in healthcare. Everything's going to become more verticalized, lawyers are going to get more specialized, and that also applies to AI and the models that support
53:46
it. Well said. Thank you, Henal. And of course, we really love the partnership, and our clients and the market, I think, are finding a lot of value in Ntracts and Doc Juris working together. Lily, any final thoughts from you? Yeah, very much in the same vein, right? AI is just a form of automation. There are a lot of ways we automate processes, and some of them include AI and some don't, because AI has pros and cons. So again, as Henal said, you really want to direct your organization to think about the problem you're solving, and not just point to AI as a solution. I'm just thinking about the baby with the laser eyes right now, causing chaos. But,
54:30
assuming you do find something that you love, in an area where AI really is the best solution for your problem,
54:39
you're only going to get the best out of that AI, the biggest outcome you could dream of, when you give it specific instruction: the playbook Henal was talking about, the context windows he's talking about, and the software wrapper that lets the users interacting with the AI know what's going on and trust it, or if not trust it, at least be critical of it and understand how it's arriving at its answer. You don't want to spend money on an AI tool where you're just correcting it all the time because it's telling you there are 31 days in April. You want something that's going to help you stay efficient and be right more often than it's wrong.
55:24
Thank you, Lily, and I know this is particularly the world that you’re living in right now, as we continue to, you know, incorporate and embed AI
55:33
into the Ntracts solution. I would just like to thank you both so much for this conversation. It was incredible; I'm so fortunate to have had the opportunity to moderate it, and I know the audience really appreciated the discussion.
55:47
As we wrap, I would also like to thank the audience for joining us today. If you'd like to learn more about Ntracts and Doc Juris, please feel free to reach out to Ntracts at www.ntracts.com
56:01
or at info@ntracts.com,
56:05
and thank you all again for joining us today.
Thu, Aug 07, 2025 4:08PM • 57:16
01:58
Also, Doc Juris is a leading contract negotiation platform that powers contracting for rapidly growing startups and large enterprises. With its headquarters in Houston, Texas, Doc Juris operates globally and across
02:14
every major B2B vertical. With that, I'm pleased to introduce today's moderator, Stephanie Haywood, Senior Vice President of Sales and Client Engagement at Ntracts. Stephanie, thanks for being here today; I'll now turn the floor over to you to get us started. Great, thank you so much, and welcome, everyone. We're so thrilled you've all joined us this afternoon. I'm so excited to be sitting here with my friend Henal Patel, CEO of Doc Juris, and Lily Ha, VP of Product at Ntracts, and I'd like to take a couple of minutes to introduce each of them. Lily is the Vice President of Product at Ntracts. Lily brings extensive experience in product leadership, having led the development of various FinTech products: launching embedded software solutions, incrementally improving a mature product, and conceptualizing and finding product-market fit for brand-new products targeting new end users. He's also built out operational processes and internal and external customer-education resources to ensure products scale to meet ongoing demand.
03:29
At Ntracts, Lily focuses on solving urgent market needs by building technologies that empower healthcare organizations to manage contracts more efficiently so they can stay focused on delivering high-quality patient care. He holds an MBA from Harvard Business School and is passionate about building inclusive, high-performing teams that create impactful tools and strengthen legal and compliance operations in healthcare.
03:55
Henal is the founder and CEO of Doc Juris, as mentioned, an enterprise AI company based in Houston, Texas, that's really redefining contract negotiation for mid-size to large organizations. Since launching Doc Juris in 2018,
04:14
he's grown his team into the leading provider of AI-powered redlining and review workflows that accelerate deal cycles while controlling and mitigating risk. Before founding Doc Juris, Henal spent nearly a decade as in-house counsel for healthcare and energy companies in Texas, where he led transactional practices and pioneered early AI applications in legal operations. Henal and Lily, thank you both so much for joining us; I'm really excited about this conversation. Before we get started, I'd love to spend a couple of minutes polling the audience to understand a bit more about who's participating today, so that we can really work to tailor
05:00
the conversation a bit. So just to start, if you could share what your role is within your organization. We'll let this sit for a couple of seconds while we get to know the audience. Lots of health systems; if you're in IT or tech, or if you're outside of healthcare, please be sure to share that as well.
05:33
Okay, let's take another look. Looks like we have a lot of in-house counsel today and several compliance officers, pretty evenly split, plus consultants. So I'm looking forward to this discussion; I think it'll be incredibly impactful as we talk about how AI is being
05:50
integrated into your legal and compliance processes. For our second question, we'd love to know what you're most interested in learning about: how to speed up contract review and negotiations with AI, a conversation we have frequently; how to reduce risk and improve compliance using automation, which of course we're going to touch on today; how to balance AI tools with human legal expertise, which I think will be a key topic, and I'm excited to have Henal and Lily dig into that; how other healthcare organizations are successfully using AI in contracting; or how to get started implementing AI into your contract lifecycle processes. So we'll give everyone just another minute
06:42
to respond, and we’ll take a look at those responses.
06:55
Great. So it looks like the majority of our participants want to learn how to speed up contract review and negotiation with AI, followed by how other healthcare organizations are successfully using AI in contracting, and then how to get started implementing it into your CLM process. We'll definitely be touching on all of those topics today. And finally, a discussion we have all the time: what is your level of comfort with using AI in a professional setting? Are you very comfortable, using it every day? Somewhat comfortable, but you don't really know your organization's guidelines (and of course we're hearing that throughout the industry as AI committees are being formed)? Somewhat comfortable, occasionally using it? Or not comfortable at all? We'll give everyone just another second to respond.
08:03
Okay, looks like people are somewhat comfortable and occasionally use it, and about 21% are very comfortable and use it every day. So I'm excited to have this discussion and hear how AI can be incorporated into your day-to-day practices. Thank you all for responding to the poll questions, and let's jump into our discussion panel. Lily, the contracting process in healthcare is notoriously complex. Can we start by having each of you share a couple-minute overview of some of the biggest pain points you're currently seeing in the industry when it comes to contract bottlenecks and compliance challenges? Lily, we'll start with you, if you don't mind. Sure. As we talk to the market and our Ntracts customers, we keep hearing that people are being asked to do more with less, right? You're reviewing 30% more contracts, but your team is not growing by 30%. What's really struck me is that the complexity, the amount of work, and the volume are all increasing. The second area that I think separates the healthcare industry from any other is how much M&A activity there is. So aside from everything being complex on its own, you're also trying to combine compliance and legal efforts between two different organizations, and those are really unique challenges in the healthcare space.
09:36
Thank you, Lily. Henal?
09:39
Yeah, thanks, Stephanie and Lily. I think you hit on something critical here around scale. I'd add that in healthcare we're seeing not just more regulations, but a real explosion in volume and stakeholder involvement. For example, you have privacy and data-use rules layered on top of anti-kickback rules and state-level patient-access mandates, all of which means legal teams are often swamped just interpreting requirements, let alone embedding them into contracts, right? So you
10:08
know, and unfortunately, though, I think you’ll be surprised at the amount of low value work that’s happening around this complexity. So for example, on average a contracts professional spends about two hours a day copying and pasting and finding word and whenever I tell our clients about the statistic it’s I always get nodding heads like, Yeah, I do that every day. And you know, at the same time to Lily’s point, timelines keep shrinking. Folks expect turnaround in days, if not hours. And as we saw in the poll, over 54% are interested in learning how AI can speed up the review process. But the challenge, though, is that playbooks and frameworks to guide those reviews haven’t kept pace, and so there’s this real mismatch that creates a perfect storm. Contracts bounce between clinical finance, IT security, Q and A back to legal, often by email chains, clunky Word documents, and by the time you surface a key obligation, let’s say a new Breach Notification clause, you already have lost weeks in a crude risk. So I think, in a nutshell, the big bottlenecks we’re seeing are regulatory complexity without corresponding process maturity. And then, from our perspective, you pair that with over reliance on manual redlining tools, sort of like this perfect storm for inefficiency.
11:20
Yeah, well said, Henal and Lily. We're continuing to hear in the market and from clients that, because of the challenges and some of the statistics you've talked about, they're really looking to implement AI solutions that can help healthcare organizations do more with less and help mitigate risk, and make sure we're not getting far down the contracting process before identifying an issue we probably should have recognized earlier.
11:54
And we really also want to enable our teams to help mitigate the risk of non-compliance. As healthcare organizations, legal professionals, and compliance officers look to adopt AI, what are the
12:12
things they should be incorporating or thinking about when they're evaluating AI features, specifically when it comes to contract lifecycle management, and not just the technology, but those processes overall? Lily, perhaps we'll start with you. Yeah, I think you really want to think about what your goal is. Are you trying to raise efficiency? Are you trying to mitigate risk? Are you trying to do all of those things at once? And really guide how you're evaluating solutions based on those goals, right? Your goal isn't just to get AI into your organization. Your goal is to actually drive towards an outcome. And so you want to partner with people who have a strong perspective on how they might incorporate AI to get you to the outcome you're looking for. Ntracts feels strongly about how we develop AI features, and DocJuris does too, and that's why we've been such good partners. But I think, one, you really want AI to show its work. Think back to grade school, when you'd get to a math problem and arrive at an answer, and your teacher says, show me your work, because then you can follow any of the logic leaps. And as a critical thinker, you can say, oh, I don't know if I believe the AI in this particular situation. So one is, show your work. Two, I think you really want to make sure the AI suggestions, the AI itself, are integrated with the rest of the product. That makes it more user-friendly; that makes it
13:40
easier to incorporate, and not something you have to think about doing on the side.
13:46
And then third, I think, is really healthcare focus. We're all healthcare professionals here, well, some of us are actually in operating healthcare organizations, but we're always trying to think about the healthcare context. If your goal is to mitigate risk, you want to apply these tools to the risky areas of your organization, and you want somebody who understands that in the software. You need to treat a vendor that has access to PHI a little differently than a vendor that doesn't, right? They have different risk profiles.
14:21
Absolutely. And you touched on a point there: not only do you need to take these considerations into account, but you also talked about how we partner. We chose to partner with DocJuris because they have a lot of the same thoughts and approach, so I'm hopeful you could talk to us a little bit about DocJuris and how your approach and your technology differ from other organizations people may encounter. Yeah, it's funny, I'm probably going to end up repeating a lot of the things that Lily said. I mean, I couldn't
15:00
agree more, so I may sound repetitive here. We really lean into specialization rather than broad conversational AI. To get a little more specific on transparency, domain expertise, and composability, which I think are the main themes of what Lily just said: we built our platform around the contract review process, which means we're laser-focused on that. If we double-click on transparency, for example, every suggestion DocJuris makes comes with a clear why in a feature-rich editing experience. You can click into the source clauses from your playbook, review commentary, see which precedent language was used and why, and trace back the logic. Conversely, if you look at chatbots like Copilot or ChatGPT, and I know some folks here are very familiar with AI, although those tools are very exciting and interesting, they can create more risk, because it's truly garbage in, garbage out. You're at the whim of how that particular model was trained, which version of the model, and, quite frankly, the prompt that sits on top of the model in terms of what you want it to do. That leads into domain expertise: our models aren't just generic NLP tools. They're developed with guardrails specific to a customer's risk framework, and I think you're going to hear a lot about that for the rest of the webinar. It's a specialized focus that flags the right obligations, not 50 generic risks.
16:29
And to touch again on what Lily said, composability. We know that legal teams often embed DocJuris into larger CLM products, like Ntracts, which is why we work so well together. So we expose our AI outputs via APIs, widgets, and checklists that slot right into your existing workflow, so there's no context switching, and a lot of that's going to change here in the future too. But the point is that you get a guided, step-by-step review process with built-in guardrails for risk. And setting aside DocJuris, if there's one thing you take away today: a step-by-step review, guardrails built in for risk management, and a hands-on checklist to validate every change before it goes final. I think that's key. And when you put it all together, and this applies to any application of AI in our view, speed and velocity are different. The one thing people say about AI is, oh, it's making me faster, I don't have to think as much. Well, the fastest way to get a contract done is just to sign it, right? But I think the lawyers, about 33% of the people on this call, would not agree to that. In that same vein, you also wouldn't run an AI redline and just send it out without looking at it. That's not how our product is designed. It's all about contracting without compromise: leveraging these tools, leveraging the technology, but doing it well. I think that's the big, important theme for today. You both touched on several key points there, and one of them is really validating the AI outputs. I think it goes back to the comment about showing your work, and Henal, I know you take that same approach when you think about the interaction with the solution and ultimately the incorporation into the workflows.
I'd love to talk a little bit about some of the most common misconceptions around using AI specifically for contracting in highly regulated industries like healthcare. We've also got several compliance officers on the call, so in addition to the lawyers, they'll probably want to hear how we can address those misconceptions and continue to mitigate risk. Henal, I'll start with you.
18:57
Yeah, there's so much I can say here. There's been a spectrum. We've been doing this for about seven years, so early on, the misconception was around people not knowing that AI predicts the next word, right? There was a misconception about how it works, and now it's swinging back the other way, where
19:16
I think people treat AI, in some cases, like a legal oracle: hey, if you feed it a contract, it'll spit back the perfect clause. Or, oh, it'll just know, if I give it some information. But LLMs are just pattern predictors. Before that, it was classic
19:32
tagging of data to create prediction models based on the tagged information. So in a nutshell, these AI models don't know the law. You really have to treat every suggestion as a first draft, with built-in audit trails, human review steps, and clear checklists so you can validate and override the things that look off given the overall context of a deal. That's a big misconception. Second, there's this idea that accuracy is all about model size
20:00
or endless fine-tuning, that I can just fine-tune the model and it'll be fine. But you can train on every healthcare policy under the sun and the returns quickly plateau. In practice, you get way more mileage from
20:14
more thoughtful prompt engineering, retrieval-augmented generation (RAG) against your own clause library, and a guided UI that steers you through the risk hotspots. And again, garbage in, garbage out: the software, at the end of the day, will be the biggest lift. Third, there are some legal tech companies that claim to have their own model, and I think the reality is that most stand on foundational models like GPT, Gemini, or Anthropic's Claude. There's a bit of a race to the bottom happening right now, and the competitive edge really lives in the software and the governance you layer on top, not in forging a new base model. That's a big misconception here as well. So, bottom line: AI in healthcare contracting is a powerful copilot, but not a solo pilot. You need the right processes, transparency, and domain expertise to make it deliver on the promise you're looking for.
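For readers curious what "retrieval-augmented generation against your own clause library" looks like in practice, here is a minimal sketch. It uses a toy bag-of-words similarity in place of a real embedding model, and the clause texts, function names, and prompt format are illustrative assumptions, not any vendor's actual implementation:

```python
# Minimal RAG sketch over a clause library: retrieve the closest approved
# clause, then ground the model's prompt in it instead of relying on the
# model's general training. Clause texts and scoring are illustrative only.
from collections import Counter
import math

CLAUSE_LIBRARY = {
    "breach_notification": "Vendor shall notify Customer of any breach of "
                           "unsecured PHI within 5 business days of discovery.",
    "liability_cap": "Each party's aggregate liability shall not exceed the "
                     "fees paid in the twelve months preceding the claim.",
}

def _vec(text):
    # Toy term-frequency vector; a real system would use embeddings.
    return Counter(text.lower().split())

def _cosine(a, b):
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def retrieve(query, k=1):
    """Return the k library clauses most similar to the incoming text."""
    q = _vec(query)
    scored = sorted(CLAUSE_LIBRARY.items(),
                    key=lambda kv: _cosine(q, _vec(kv[1])), reverse=True)
    return scored[:k]

def build_prompt(incoming_clause):
    """Ground the review prompt in the approved clause, not model memory."""
    name, standard = retrieve(incoming_clause)[0]
    return (f"Compare the incoming clause to our approved '{name}' language "
            f"and flag deviations.\nApproved: {standard}\n"
            f"Incoming: {incoming_clause}")

incoming = "Supplier will notify us of a breach of PHI within 60 days."
print(build_prompt(incoming))
```

The point of the pattern is that the model is asked to compare against your own approved language, rather than invent a "correct" clause from its training data.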
21:09
Yeah, I can jump in there a little bit too. Henal made some really great points. This is where we see that specificity leads to higher precision, and that's why we think a healthcare contracting focus really can lead to a better user outcome. I actually have a funny story about this whole garbage in, garbage out, and why you need a human to double-check the work. I was helping a friend calculate the interest payments on a loan; it was part of a contractual document, and ChatGPT said, you're going to be paying this amount in interest. Well, how did you get that? Well, given April has 31 days, your interest payment will be this. And April obviously does not have 31 days; it is very well documented that April has 30 days. But because an LLM is really just predicting the next word, it might have thought, April is a month, and some months have 31 days, and that's how it brought in that wrong number. So things as concrete as how many days are in April can be misinterpreted by AI, and you really want to think about that when applying it to a clause. That's where I think DocJuris has done really well: making sure you have those playbooks and those things to train the model. It doesn't have to be a lot, it can be small amounts of data, but it's very tailored to your situation and really puts those guardrails on the model. Thank you, Lily. Ensuring that we're mitigating compliance risk in real time is something we're always focused on.
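Lily's April anecdote is a good illustration of why arithmetic belongs in deterministic code rather than in a next-word predictor. Here is a sketch of a simple-interest calculation where the day count comes from the calendar rather than from a model; the principal and rate are made up for illustration:

```python
# Deterministic daily simple-interest calculation: the day count comes from
# the calendar module, so "how many days are in April" can never be wrong,
# unlike an LLM predicting the answer token by token.
import calendar

def monthly_interest(principal, annual_rate, year, month, basis=365):
    """Simple interest accrued over one calendar month (actual/365)."""
    days = calendar.monthrange(year, month)[1]  # 30 for April, always
    return principal * annual_rate * days / basis

april = monthly_interest(100_000, 0.06, 2025, 4)
march = monthly_interest(100_000, 0.06, 2025, 3)
print(f"April ({calendar.monthrange(2025, 4)[1]} days): ${april:,.2f}")
print(f"March ({calendar.monthrange(2025, 3)[1]} days): ${march:,.2f}")
```

A chatbot can still be useful for explaining the clause, but the number itself should come from code like this, which the human reviewer can audit line by line.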
22:57
We know that healthcare organizations are actively working to stay ahead of evolving compliance regulations, and they're also looking for ways to streamline their processes, incorporate AI, adopt contract lifecycle management solutions, and so on. So how can AI support this effort and mitigate that risk in real time?
23:21
Yeah, I think,
23:24
I think a lot of times people ask me, how can AI help me? And I think that's the wrong question; it's a solution in search of a problem. Really, the question is: what can I standardize and automate so that I can reduce my risk and make my organization more efficient? Big picture, compliance comes down to culture, right? What are all those business stakeholders in your organization doing when you're not breathing down their necks about policies, procedures, and processes? What choices are they making? And I think it's especially hard for the healthcare industry right now because M&A is driving so much organizational change. You're trying to bring together multiple groups of people who may have operated differently. So really, you're looking for tools that can support people and make sure everyone is operating consistently. They might not know what the next right step is and just be guessing; if you bring in tools that have that standardization built in, which might include AI or might not, that's really where you're going to get the bang for your buck on compliance and risk. I like to think about leading indicators of compliance and lagging indicators. A lagging indicator might be getting everything ready for an audit. If you bring AI into that, maybe it can speed up your audit process and gathering all that information, but at the end of the day, is it really making your organization less risky? Everything has already happened; you're just reporting on it. That's a lagging indicator. Where maybe you really want to focus
25:00
your efforts is on those leading indicators: things like, how many compliance processes do you have? Are they best-in-class processes, modeled off best practices? Can someone who's new to the organization accidentally skip a step? What percent of your contracts are going through those processes? And how can you look at statistics more in real time, and see that, oh, in April we actually didn't get through as many contracts as we did in March, and why, and be able to dig deeper? If you have AI helping with those types of indicators, I think that's going to lead to a much more successful compliance program.
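The leading-indicator reporting Lily describes is straightforward to prototype. Below is a minimal sketch, assuming contract records with a signed date and a flag for whether the standard workflow was followed; the field names and sample data are invented:

```python
# Sketch of "leading indicator" reporting: month-over-month contract
# throughput and the share going through the standard workflow.
from collections import defaultdict
from datetime import date

contracts = [
    {"signed": date(2025, 3, 3),  "standard_workflow": True},
    {"signed": date(2025, 3, 18), "standard_workflow": True},
    {"signed": date(2025, 4, 2),  "standard_workflow": False},
    {"signed": date(2025, 4, 25), "standard_workflow": True},
]

def monthly_stats(records):
    """Count contracts per month and the % using the standard workflow."""
    by_month = defaultdict(lambda: {"total": 0, "standard": 0})
    for r in records:
        key = r["signed"].strftime("%Y-%m")
        by_month[key]["total"] += 1
        by_month[key]["standard"] += r["standard_workflow"]
    return {m: {"total": v["total"],
                "pct_standard": 100.0 * v["standard"] / v["total"]}
            for m, v in sorted(by_month.items())}

for month, stats in monthly_stats(contracts).items():
    print(month, stats)
```

In a real CLM this data would come from the repository rather than a hard-coded list, but the shape of the report, counts per month plus the percentage following the standard process, is the same.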
25:45
Thank you, Lily. So obviously compliance can be achieved through software and solutions not necessarily requiring AI. In other instances, we've seen AI incorporated into both regulatory compliance and organizational compliance and governance processes, while considering the people component, the human element, in combination with the technology solution. You've both touched on it before, but Henal, I'm wondering if you could add a little bit more on what you're seeing, your approach, and your discussions. Yeah, I love those thoughts, and I agree. Compliance is first and foremost about people and culture. When you compare contract lifecycles to a patient's care journey, for example, you have handoffs, check-in, discharge, sign-off; everything needs a clear protocol and tight coordination. So AI, by itself, is never going to be a silver bullet for organizational misalignment. In fact, two birds, one stone: solve the people and culture
26:51
first as you deploy
26:53
AI. That said, when you legislate against risk, meaning you codify your guardrails up front (for example, Lily's example: hey, just so you know, April has 30 days, right?), and when you have a living compliance playbook, AI can absolutely crush it and become your real-time sentinel. And that's really where things are going. At DocJuris, we think step one is defining those guardrails and those inputs. I mean,
27:22
one joke I tell a lot of our prospects, friends, and colleagues is that AI is like a baby shooting lasers out of its eyes: you often need to grab that baby and point it in the right direction. So your legal and compliance teams need to map out, clause by clause, what's acceptable under all the different rules, HIPAA, state privacy laws, BAAs, etc. That playbook then feeds a review engine, which should continuously scan incoming contracts and flag deviations and gaps, and that's where the promise lies, right, for the 55% who are interested in contract review.
27:55
Now, of course, you still need people in the loop, and that's why it's important to bake in regular checkpoints to review what the AI is flagging, adjust the thresholds, and update the playbook as regulations evolve. My last point here, I think, is that we should push back on the idea that we can feed some magical AI a bunch of data on a continuous basis and somehow that delivers on the promise; I think that's where Lily and I are agreeing. The problem is that this approach can also perpetuate mistakes or decisions made under pressure and increase risk. In the AI world, there's a term called AI drift: when you rely on a stream of inputs and machine learning, the model drifts, maybe because we were under pressure.
28:39
And so, in summary, governance is key, and legislating that governance will prevent a lot of these concerns that legal and compliance teams have with AI.
28:49
Thank you, Henal. Let's talk about some real-world examples of how organizations and individuals are successfully using AI-powered solutions to streamline their processes. Henal, maybe we can start with you for a few examples. Yeah, I can lead with some low-hanging fruit, and I'll describe it in the context of DocJuris and how we've deployed it, but there are some low-tech solutions here that you can deploy on your own as well. So
29:18
number one, what we see a lot is automating sub-threshold reviews. We have one regional health system that used to waive legal review on purchase orders under $10K, but underneath the surface they were worried about hidden risks, like copyleft clauses in their imaging equipment or indemnity
29:39
mismatches in their data use agreements, both of which present risks that go beyond the purchase price, naturally, for the compliance folks in the room. By integrating AI-powered screening, they can now auto-screen those low-value contracts against those
29:57
high-risk areas. So about 85%
30:00
end up getting a clean bill of health and bypass legal,
30:04
while the remaining 15%
30:08
are flagged for review. The result is that they were able to slash risk on contracts that weren't being reviewed anyway. That's, I think, low-hanging fruit. Another one is standardizing terms. Going back to governance, another healthcare system was spending, I think, days manually hunting for deviations in vendor MSAs, particularly around warranty clauses and liability caps. So the real-world application here is using
30:35
AI like DocJuris to flag non-standard warranty carve-outs and liability limits. They went from a four-day average negotiation cycle down to 24 hours, and meanwhile they're embracing and empowering other
30:49
departments to look for those things on the front lines and then moderating the review, so there's a process there. At scale this is really valuable, because unfortunately Microsoft Word is where a lot of data goes to die. Even if you have great processes on the front end, it's hard to roll that information up to a dashboard: where is it that we're always fighting in our agreements? So it not only leads to good front-end review, but management can look at the bigger picture and strategize around the types of risks that are most important. And I think the final one is just reclaiming bandwidth. We're working with a mid-size pharma company where attorneys were previously logging eight-hour reviews on every MSA. With our guided checklist, on both incoming redlines to their template and third-party paper, they're able to knock out that first-pass review in about 90 minutes. We talked a little bit about velocity, but the real benefit here is that the extra capacity allowed their legal team to focus on more strategic projects versus the drudgery of staring at fine print all day. There's a lot more that legal teams can bring to the table in terms of strategy and being a partner to the business, and I think that's a pretty big, low-key benefit that AI can bring to many organizations.
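The sub-threshold screening Henal describes can be approximated in spirit even without a vendor tool. A rough rule-based sketch, where the dollar threshold, risk patterns, and field names are invented for illustration:

```python
# Rough sketch of sub-threshold contract screening: auto-screen low-value
# purchase orders against a few high-risk patterns and only route the
# flagged ones to legal. Thresholds and patterns are illustrative only.
import re

RISK_PATTERNS = {
    "copyleft": re.compile(r"\b(GPL|copyleft|open[- ]source license)\b", re.I),
    "indemnity": re.compile(r"\bindemnif\w+\b", re.I),
    "phi_access": re.compile(r"\b(PHI|protected health information)\b", re.I),
}
REVIEW_THRESHOLD = 10_000  # contracts at or above this always go to legal

def screen(contract):
    """Return ('legal', reasons) or ('auto_approve', []) for one contract."""
    if contract["value"] >= REVIEW_THRESHOLD:
        return "legal", ["above value threshold"]
    reasons = [name for name, pat in RISK_PATTERNS.items()
               if pat.search(contract["text"])]
    return ("legal", reasons) if reasons else ("auto_approve", [])

po = {"value": 4_500,
      "text": "Supplier shall indemnify Customer against third-party claims."}
print(screen(po))  # → ('legal', ['indemnity'])
```

A production system would use far richer detection than keyword regexes, but the routing logic, a clean bill of health bypasses legal while flagged contracts are escalated, is the same shape as the workflow described above.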
32:05
Thank you. Lily, anything you want to add? Yeah, I think on your last example, we hear people say they want to work at the top of their licenses. They don't want to be stuck in the mud, right? They want to really be thinking about and untangling the hard problems. And the other piece, on really speeding up some of the contract reviews: we talked with a customer who said, we are signing contracts for band-aids and laser beams, and you know what? I don't really care about those band-aid contracts. Those we might be able to get through with DocJuris, but the laser beam contracts I really want to spend more of my time on. And thinking about that 30% increase in contracts, maybe the 30% that are on band-aids and gauze and that kind of stuff doesn't need as much attention. So
32:58
on the post-signature side, I think where people have really coalesced is around AI-assisted contract abstraction. This is the idea that when you're putting things into a repository, you want to pull out the important data points in the contract so that you can find it later and act on that contract later. And initially, when we talk to
33:26
people who are using this feature, they think it's going to save them time, and it does, but that's not really the point. There is one instance in which it does save time, right? We have one customer whose organization just underwent a merger, and she has this huge folder of contracts she needs to load into our system. It's never been a high priority on her list, but now that she has this AI-assisted contract abstraction, whenever she's got ten minutes, she can just bam, bam, bam, get through her to-do list a little bit faster.
33:59
But really, where the ROI comes in is reducing risk, right? Making sure that if you're not the only one putting contracts into your contract repository, the other people spread across the entire organization are actually filling in the data fields that might kick off another action. They might not be there a year from now, but you're still going to need to manage the lifecycle of that contract going forward. So things like: are you sure you didn't misread whether this vendor has access to PHI? Because if you answer that incorrectly, you could be missing a BAA, and that puts you at risk for fines and penalties, as we all know. We talked to other customers who say, you know what, I wasn't thinking one day and I accidentally fat-fingered something, and the AI caught that I might have put in the wrong auto-renewal date. And that is just ROI in and of itself, right? That pays for the entire
35:00
AI tool.
35:02
Thanks, Lily. Both you and Henal shared some really fantastic examples, and I think we could spend a lot of time talking about other real-world examples.
35:15
For those organizations that are looking ahead, maybe they haven't already adopted AI into their CLM processes, or they're on the journey and they'd like to continue it: what are some of the steps you would recommend to really ensure a smooth transition? Lily, if you don't mind, we'll start with you. Yeah. I think first, you want to identify the features and solutions that are going to help you catch risk. I'm talking to all the compliance and legal people on the call: imagine yourself at your worst, like you've gotten norovirus and been up all night. What areas of your day-to-day do you want to have extra backup for? That's really where AI is going to help you. And I'm sure, as you think about it, a majority of those are pretty well-defined use cases, and they're going to be fairly specific to healthcare; attorney friends outside healthcare might not be thinking about the same things you are. And therefore your chance of getting precision from AI can actually increase. I also think you want to consider how user-friendly a tool is, because that will help you with rollout and adoption. I know a lot of people here are very comfortable with AI, but if you want more and more people to use an AI tool, you want to make sure it's not scary or intimidating to them. Because sometimes tools
36:45
flash beta labels and things that make you unsure whether what you're doing is going to be permanent, or whether it might mess up something downstream you're not aware of. Second, I think, is to really be skeptical and know that there's some level of interpretation here. You're asking a tool to weigh in on something you've been well trained to do; you have experience, you understand nuance, and anyone using this tool may not have had those experiences, but they're still responsible for the output. That's the culture of compliance: everyone is responsible for compliance. That means picking tools that show their work, their thinking, their logic, to the people using them. And you really want to be thinking about whether the tool is enhancing guardrails you already have in place, not confusing the user because it's overly complicated or unhelpful.
37:46
That's fantastic. Thank you, Lily. Henal, did you have anything you'd like to add?
37:53
Yeah, I think when it comes to
37:57
designing this type of implementation involving AI, at the end of the day it often comes down to how well you're able to
38:11
rally the people around the process. I know that's kind of a hand-wavy thing to say, but it becomes really important to design up front and ask the hard questions around,
38:23
for example, where's the volume? One thing that's highly beneficial for us when we're thinking about implementations is that we don't have to conquer every problem up front. Let's chip away at this in an iterative fashion and not bite off more than we can chew, as they say. So that would be the one thing I'd add. Thank you, Henal, we're completely aligned there, and I know we have that same approach, as we've talked about before. I want to leave some time for Q&A, but I'd like to go ahead and start looking ahead. How do you see AI evolving, particularly in CLM, and what should organizations be thinking about and preparing for within the next couple of years? Lily, we'll start with you. Yeah, I think we've talked a lot about contract data abstraction already, and the value in managing the future
39:22
life cycle of a contract. Other areas, there are a couple of use cases that have been bubbling up to the surface. We hear a lot about contract summaries, and I think where we really see success is when people use them to give an overview to an exec, somebody who's going to use that information at arm's length; maybe they're part of the signature process and just want a quick "here is what this contract is about." But as everyone on this call knows, the value is really in the details and in the nuance, so you have to decide whether that's really beneficial,
40:00
for example, to your organization.
40:02
The second area we hear a lot about is self-service questions on specific contracts. This is really a chatbot that can help translate clauses, not for you, but for business stakeholders, helping them answer questions about individual contracts. What you've got to think about here is: do you want your organization getting advice or interpretation from an AI model? What actions will they take from those answers, and will those put you at risk or not? You'll have a good gauge of that based on whether you're getting pinged with really simple questions, and whether you have people in your organization who go off the cuff and take actions without talking to you first. Depending on the culture, I could see this very much helping, so that you're not getting hit with all of these interpretation questions; you can mainly focus on the more sophisticated questions and save a little time that way, if that makes sense. Third, we hear a lot about clause searching. You're not searching for a single contract, and you're not choosing up front what you might search on later. This is: tariffs just went up, and now I want to go pull all the tariff pricing out of all of the contracts we have today.
41:29
This is really about pulling a big list you can take action on across that body of contracts going forward, and it's pretty ad hoc. This, I think, is a little more on the edge of feasibility, so we're definitely excited. It's a really cool problem, and I think high impact for an organization.
41:52
Thank you, Lily. Henal, as
41:55
the founder and CEO of DocJuris, you certainly saw the future trends several years ago when you started the company. Talk to us about what future trends you continue to see as we move forward.
42:08
Yeah. I mean, if I go back to the baby shooting lasers out of its eyes, I mean, just on a very basic level, that baby is going to grow up, right? And I think just to give you some a really interesting data point that everyone can sort of think about, chew on over lunch is, you know, when chat GPT three came out, or two, even the early stages of
42:30
the models, there's only so much context that you could put into the chat, and there was only so much response you could get back: tokens in, tokens out. Some of the most recent models that have come out this week, actually, literally, last week,
42:47
you can put in over a million tokens of context, which is the equivalent of 20 novels. Whether it does that well and accurately will just be a matter of time. The freight train has already left; this is going to be the future. The models are going to get better, and they're going to get more accurate. So foundationally, that's what everyone should take away: these models are just going to get better. There's no question about that. We're seeing it daily in our own research, since we started the company in 2018.
43:25
But if we get a little bit more specific, I think the first is embedded task orchestration. Today, you can ask a bot to write you an email, to give you a summary, or to say, hey, tell me what the tariff risks are in these 14 contracts. Tomorrow, it'll be: now, what would you like me to do with that summary? Do you want me to draft a Word document? Do you want me to spin up a spreadsheet? Do you want me to assess your balance sheet and highlight certain rows where there's risk, add comments, update CLM fields, kick off an e-signature workflow, all moderated, all without you jumping between apps? So there's this sort of multimodal, or what's called
44:02
task orchestration, that will be the next use case. And a lot of that's just software, by the way. But as the models get better, and as software companies like ours start to embrace it more, you're going to see more applications. And
44:16
so it's software around these models, and as these models get better, the software is also going to grow. Second is, I think, contextual awareness. AI will be able to pull in data from
44:28
EHRs, billing systems, real-time analytics. So
44:32
for the lawyers in the room, I still believe that today, AI is not here to replace advocacy, negotiation, and context. But I think context is probably the first domino to fall in the near future; there's already early evidence of this being the next big thing in enterprise applications. So
44:48
instead of standalone clause markup and analysis, you can get insights like: hey, this pricing amendment you just drafted aligns with your top revenue service line, or
44:58
this liability cap may
45:00
expose you based on that recent merger. Speaking of M&A, things like this are going to be truly game-changing, and they're happening now with our customers: collapsing that context gap that we have because of how involved we are in different aspects of the business. I think third is autonomous governance loops. Once you're able to define your compliance playbook, AI will continually monitor live contracts, flag deviations in real time, spin up tickets, send notifications. It'll be much more agentic, even suggesting updated clauses for legal to review before they go into production. These are some of the things that we see, and some of the crazier things we anticipate in the next decade or two: agents negotiating against each other in a contract review without people in the room. These things are going to come; it's just a matter of time, and we have to be prepared. So today, if you're a healthcare organization, first start by unifying clause libraries and templates, standardizing your appetite for risk, and mapping out the end-to-end process, so that when these smarter, task-driven, context-rich agents arrive, which we feel strongly is inevitable, you'll have the plumbing in place to execute. People will remain an important part of this puzzle; it's just that our skill set has to evolve, right? The practice of law has to evolve toward this. We saw this with typewriters to WordPerfect to Microsoft Word, and this is definitely the next leap.
46:35
Thank you, Henal. I want to ask just a couple of questions. You've both talked about incorporating people into the process, and you've both talked about how AI really can't be blindly trusted and that validation is key, and both of our companies take that same approach. If you could each touch on this, and maybe we'll start with you, Lily: what specific guardrails or review processes have you seen work best with healthcare legal and compliance teams to safely operationalize AI?
47:12
Yeah, I think it's really where you're already doing the same process, you just maybe have something pre-filled out by AI, right? So you're reacting to something instead of having to generate that response yourself. And I don't know about you, but it's always easier for me to correct something than it is to come up with something to begin with. So hopefully your team is already bought into a tool, and it's really just using that same tool in a faster manner.
47:49
Thank you, Lily. Henal, a question for you. You talked about prompting AI effectively to get better results, and you talked about the future state of AI.
48:03
Do you have examples of how legal teams can structure prompts to ensure consistent, reliable outputs as they start to think about using AI, particularly when they're reviewing contracts? Yeah, it's definitely an art. It reminds me of when Google came out, right? There were the haves and have-nots: the haves were those that could tap into Google, and they had that magic wand. And prompt engineering is actually much more complex than a Google search. But let me first start with
48:38
some wrong assumptions. Some people assume, and I think this goes back to misconceptions, that, oh, I can just feed this thing a bunch of data and then it's going to produce a result for me. I think there is a very important component of prompt engineering that everyone
48:58
in the room should learn. There's a method to prompt engineering, and there are so many articles and best practices, but one thing to think about is chain of thought. So
49:11
it goes back to the April example, right? The more of a nudge you give these AI models, the better nudge they're going to give you back.
49:21
When we think about that practically: when you're building your playbooks, you should think about a playbook as an amalgamation of the criteria that you're identifying in a contract. Let's say I'm looking for governing law. What are the outcomes I'm anticipating if it's found or not found, or if it's Delaware versus New York versus the state your healthcare organization is in, and then how do we respond to that? You have to think about it as an expert system, and it's these expert systems embedded in these prompts that, I think, make it more consistent and reliable.
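To make the expert-system idea concrete, here is a minimal hypothetical sketch; the schema and rules below are illustrative only, not DocJuris's or Ntracts' actual implementation. A playbook entry pairs a criterion (governing law) with the anticipated outcomes and responses, then gets assembled into a chain-of-thought style prompt:

```python
# Hypothetical playbook entry driving a structured prompt.
# Field names and rules are illustrative, not any vendor's schema.

PLAYBOOK = {
    "criterion": "governing law",
    "outcomes": {
        "not found": "Flag as missing; request insertion of our preferred clause.",
        "Delaware": "Acceptable; no change needed.",
        "New York": "Acceptable with legal review.",
        "other": "Escalate; propose fallback to our home state.",
    },
}

def build_prompt(playbook: dict, contract_text: str) -> str:
    """Turn a playbook entry into a step-by-step review prompt."""
    rules = "\n".join(
        f"- If governing law is {case}: {action}"
        for case, action in playbook["outcomes"].items()
    )
    return (
        f"You are reviewing a contract for one criterion: {playbook['criterion']}.\n"
        "Think step by step: first locate the clause, then classify it, "
        "then apply exactly one rule below.\n"
        f"Rules:\n{rules}\n\n"
        f"Contract:\n{contract_text}"
    )

prompt = build_prompt(
    PLAYBOOK, "This Agreement is governed by the laws of Delaware."
)
print(prompt)
```

The point is that the playbook, rather than the raw pile of contracts, carries the rules the model is asked to follow.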
49:55
The problem is when you assume that very generic, unstructured text
50:00
is going to result in better results. More is not always better with these models. The more direction you provide an AI model up front, the better results you'll ultimately get. We have a client that came to us with
50:17
a mountain of historically signed agreements, and they said, can we use this to train your AI to automatically mark up third-party SaaS agreements? And we said, sure, but that playbook you have on the side, the one your legal team wrote, is actually more effective for the model to interpret and act on, because the prompt engineering in the playbook serves as the rules of the road, and the historical documents are how the rubber meets the road. It's that one-two punch that becomes really important. So yes, prompt engineering is an important skill set, but it's also just being really specific about your needs. One simple trick, and you can try this today in Copilot: what's your goal? What is the output that you expect? What are some warnings that you want the AI model to think about? And then what are the don'ts? If you just follow that framework, it gives you a good feel for how much better a result you'll get. Again, it goes back to governance: when you feed the model a governance framework, you'll end up getting better results. And try different models, because different models are good at different things, right? You can try the same prompt with different models,
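The goal / expected output / warnings / don'ts framework he describes can be sketched as a reusable prompt template. This is a hypothetical illustration you could adapt for Copilot or any chat model, not a vendor feature; the example values are made up:

```python
# Hypothetical template for the goal/output/warnings/don'ts framework.

def framework_prompt(goal: str, expected_output: str,
                     warnings: list[str], donts: list[str]) -> str:
    """Assemble the four framework sections into one prompt string."""
    warn = "\n".join(f"- {w}" for w in warnings)
    dont = "\n".join(f"- {d}" for d in donts)
    return (
        f"Goal: {goal}\n"
        f"Expected output: {expected_output}\n"
        f"Warnings to keep in mind:\n{warn}\n"
        f"Do NOT:\n{dont}"
    )

prompt = framework_prompt(
    goal="Review this SaaS agreement for liability-cap risk.",
    expected_output="A list of relevant clauses with a risk level and a suggested redline for each.",
    warnings=["Caps below 12 months of fees are outside our risk profile."],
    donts=[
        "Do not invent clause text that is not in the agreement.",
        "Do not give legal advice; flag items for attorney review instead.",
    ],
)
print(prompt)
```

Trying the same assembled prompt against different models, as he suggests, is then just a matter of sending the one string to each.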
51:37
and that's why AI, I think, is so overwhelming. It seems like we have a lot of experts in the room, so I'm sure they've all realized that too.
51:46
Fantastic. For those in our audience who are familiar with AI, hopefully they picked up a couple of new tips around prompting, and those who are not as familiar or comfortable with it certainly have something to try out this afternoon. I know I'll be testing some different models. As we wrap up, I'd just love to get some final thoughts from both of you. Henal and Lily, I really appreciate your time today. Henal, any final thoughts for our audience? Yeah, I loved the discussion. Before we sign off: first, treat AI as your copilot, not a replacement. Amplify the strengths of your process, but it needs control. The second thing is to anchor every AI initiative with a playbook, and put some work up front into defining that risk profile and having a process around updating it, because that's going to be continuously important, and it's what's necessary to achieve the outcomes that you're looking for.
52:51
I would say third: measure, iterate, refine. Set clear KPIs around cycle times, and begin with the end in mind. What do I want to achieve first? What's the pain point that I want to solve, and what's the user-adoption risk? Then loop that back into what you want to solve with AI. I think Lily mentioned it earlier: you don't want to just focus on the solution; let's focus first on the problem we're seeking to conquer. I'd say finally, partner with vendors who speak healthcare fluently. AI is ultimately going to make everything more specialized, including AI companies themselves, and this is actually why Ntracts has been our most successful partner to date in terms of real results for our customers in healthcare. Everything's going to become more verticalized. Lawyers are going to get more specialized, and that also applies to AI and the models that support
53:46
it. Well said. Thank you, Henal. Of course, we really love the partnership, and I think our clients and the market are finding a lot of value in Ntracts and DocJuris working together. Lily, any final thoughts from you? Yeah, I think very much in the same vein, right? AI is just a form of automation. There are a lot of ways that we automate processes; some of them include AI, and some of them don't, because AI has pros and cons. So again, as Henal said, you really want to direct your organization to think about the problem that you're solving, not just point to AI as a solution. I'm just thinking about the baby with the laser eyes right now, right, causing chaos. But assuming, yeah,
54:30
assuming you do find something that you love, in an area where AI is really the best solution for your problem,
54:39
you're only going to get the best out of that AI, and get the biggest outcome that you could dream of, when you give it specific instruction, right? The playbook that Henal was talking about, the context windows he was talking about, and the software wrapper that makes it so the users interacting with the
55:00
AI know what's going on and can trust it, and if they don't trust it, they can at least be critical of it and understand how it's arriving at its answer. You don't want to spend money on an AI tool where you're just correcting it all the time because it's telling you there are 31 days in April. You want something that's going to help you stay efficient and be right more than it's wrong.
55:24
Thank you, Lily. I know this is particularly the world that you're living in right now, as we continue to incorporate and embed AI
55:33
into the Ntracts solution. I would just like to thank you both so much for this conversation. It was incredible. I'm so fortunate to have had the opportunity to moderate it, and I know the audience really appreciated the discussion.
55:47
As we wrap, I would also like to thank the audience for joining us today. If you'd like to learn more about Ntracts and DocJuris, please feel free to reach out to Ntracts at www.Ntracts.com
56:01
or at info@Ntracts.com
56:05
and thank you all again for joining us today.
56:09
Thanks, excellent. That's all the time we have for today. I want to thank Stephanie, Henal, and Lily for an excellent panel, and Ntracts for sponsoring today's webinar. To learn more about the content presented today, please check out the resources section on your webinar console and fill out the post-webinar survey. Thank you for joining us today. We hope you have a wonderful afternoon. [Applause]
Speakers:

Lily He,
Vice President of Product,
Ntracts

Henal Patel,
Chief Executive Officer,
DocJuris

Stephanie Haywood (Moderator),
Senior Vice President of
Sales & Client Engagement, Ntracts
