Five questions to ask before signing that AI contract

Ask them early and ask them often to avoid unforeseen consequences.


Signing a contract with a software vendor to implement an AI innovation in a medical practice is like starting a committed relationship – ask the tough questions early, and keep asking them through the life of the relationship to avoid unexpected, potentially catastrophic consequences.

That’s the advice to practice managers and clinicians from Karen Lee, regional lead counsel at medical access advocacy group Clinigen.

Speaking at the Australasian Institute of Digital Health’s AI.Care conference in Brisbane earlier this week, Ms Lee provided five questions to ask before signing with an AI provider, including those offering AI scribes.

“Bring these questions forward. Ask them early,” she said.

“The goal isn’t just to deploy AI. It’s to use it responsibly and make sure that your patients can trust it.”

Ms Lee said the vibe around AI implementation in healthcare was “incredibly exciting” but promising projects could fall victim to predictable pitfalls.

“[This happens] not because they’re bad ideas, not because the technology doesn’t work, but because of issues that haven’t been spotted early enough,” she said.

“I want to show you the quiet parts of AI implementation. The parts that we get right together can really unlock tremendous value for your organisations.

“And if we get them wrong, unfortunately, these things can bring even the best ideas to a grinding halt, or cause your projects to be delayed.

“All of these items are predictable. They’re not bad luck. They’re early warning signs for your project, and what they could lead to are expensive redesigns of your project because of risks not being caught early enough.”

Data rights

Data is the “beating heart” of every AI-driven system, Ms Lee pointed out. More specifically, de-identified data is crucial to the trust patients may have in the AI and the clinicians employing it.

“There is an assumption that the AI is using de-identified data,” she said.

“But de-identification might mean different things to different vendors or to different parts of your organisation.

“Secondary use of the data might be restricted in ways that we didn’t anticipate, and of course, we need to comply with Privacy Act obligations and our health privacy principles that apply.”

Questions to ask:

  1. What specific data fields will the tool access or generate?
  2. Is the de-identified data enough for this use case, or do we need consent or another lawful basis?
  3. Can the vendor use our data to train, fine-tune or improve models for other customers?
  4. Does the contract define and limit data use, storage, retention, deletion and secondary use?
  5. Where will the data be stored and processed – in which countries?
  6. Should we run a privacy impact assessment?
  7. Can the vendor delete all copies of our data if we stop using the product?
  8. How will we explain our data use to patients, healthcare professionals or the public?

“If you get vague answers here, this could turn into big surprises for your organisation later,” said Ms Lee.

“Think about whether your vendor can delete all copies of your data if you decide to walk away.

“This is really important, because many vendors might not be able to do so. If your data is still sitting within a vendor’s system, be really mindful, because you might still be accountable for it.

“Most importantly, how can you explain your data use to your patients and clinicians?

“Transparency builds trust for your organisation, and silence would destroy it really quickly. Trust is what makes or breaks AI in healthcare.”

Intellectual property rights

Once an AI intervention is deployed, practices and clinicians can make tweaks to improve the product, depending on a practice’s individual workflow and patient demographic profile.

Who owns the changes made to the vendor’s product?

“It’s like when any relationship ends,” said Ms Lee. “When you break up, who gets to keep the dog? In this case the ‘dog’ is the data.

“Can we take our data with us? Can we take the retrained model with us?

“This is really important, because if the vendor’s model learns from your data, your workflows, your patient population, then that improvement has strategic value, both for the vendor and for your organisation.

“If the vendor owns those improvements, then your insights could become their product in the future, your competitors might benefit from the work that you paid for and did throughout your project. So, this question is important to protect your upfront investment.”

Questions to ask:

  1. Who owns any improvements or retrained models we create?
  2. If we stop using the vendor, what rights do we keep to the tool, the improvements or the outputs?
  3. Can we export our model outputs or logs in a usable format when the contract ends?
  4. Does the vendor have the right to reuse, resell or build on the work we co-develop with them?
  5. Are the outputs of the AI tool considered our IP or the vendor’s?
  6. What proprietary components of the vendor’s system do we become dependent on? What fallback options exist?
  7. Is the licence perpetual, term-based, subscription-based, or usage-based? How does that affect long-term control and cost?

Auditability and documentation

Vendors are known to say things like “we’ll give you this information after you’ve signed the contract”, said Ms Lee.

“Ask what documentation is available today, not what will become available later,” she said.

“You want to do these assessments upfront, before you’re locked in with a particular vendor. Things like clinical governance, safety reviews and risk assessments you want to do early on in your process, thinking about whether you’ve got validation results or performance evidence.

“You want to have this information to check accuracy on the relevant populations for your project, error rates and performance across diverse cohorts, and also limitations that might be inherent in the model.”

Questions to ask:

  1. What documentation do you provide that we can rely on for audit, safety and regulatory review?
  2. Do you provide evidence of performance, such as technical sheets or validation results?
  3. What logs or audit trails are generated? How long are they kept for?
  4. How would we investigate an incorrect or harmful output?

Classifying the tool correctly

There is a series of misconceptions that are easy to fall for when being sold an AI product, Ms Lee said.

It’s not a medical device because it uses AI.

“That’s not true, because it’s actually about what the tool does,” said Ms Lee. “We really need to care about whether or not it should be classified as Software as a Medical Device.”

It’s just workflow assistance, so it can’t be regulated.

“Calling something just a workflow tool doesn’t change its actual impact. It’s really about the effect of your tool, not how you brand it.”

If the clinician makes the final call, then it’s not regulated.

“Again, this provides a false sense of security for us. Even if there’s a human in the loop, that doesn’t mean that regulatory oversight is [not required].”

It’s not a medical device because it doesn’t diagnose anything.

“That’s not true. In addition to tools that diagnose, we also need to think about tools that perform other aspects [of healthcare] that are regulated by the TGA.”

We already have similar tools in Excel – so the AI version is the same.

“Again, this isn’t true, because AI can bring opacity. You can’t see exactly how it reaches conclusions, the outputs can drift over time, and it’s also dependent on your data quality as well.”

If it’s de-identified data, then TGA regulation doesn’t apply.

“This isn’t true, because the TGA regulates tools based on their clinical impact, not only whether the data is identifiable. We need to be mindful that even if your tool operates on de-identified data, it could still be classified as Software as a Medical Device.”

If the vendor is compliant overseas, they’re compliant in Australia.

“This is risky, because each jurisdiction has its own risk classifications, documentation expectations and post-market monitoring requirements. Sometimes you take comfort in the fact that the tool is regulated in different jurisdictions, but for us in Australia, we need to think about what applies for us locally.”

Over-reliance on vendor assurances

If a vendor says it is fully compliant, there are some important questions that need to be asked to test that claim, Ms Lee said.

We’re fully compliant.

  1. Which standards, regulations or laws are you compliant with?
  2. How do you monitor ongoing compliance as your model evolves?

We follow industry best practice.

  1. Can you show how those practices are applied to your AI model?
  2. How do you test alignment with best practices over time?

Our model is too complex to explain.

  1. Explainability is required for clinical governance – what level of explanation can you provide?
  2. What is your approach to supporting human oversight and accountability?

Global terms apply.

  1. Are your global terms drafted for the Australian healthcare environment?
  2. Can we discuss variations required for Australian risk allocation?

“If you ask these questions early, hopefully you will avoid a lot of [problems],” said Ms Lee.

“AI isn’t just a tool.

“It’s really a relationship with data, with your people, with risk and with responsibility, and like any relationship, the more honest the conversations we have, the stronger the relationship becomes.”

The Australasian Institute of Digital Health’s AI.Care conference was held in Brisbane on Monday 24 November and Tuesday 25 November.
