If AI Makes a Mistake, Who Is Liable?
Legal Implications of AI in Healthcare
With AI everywhere, it’s easy to buy into the idea that an AI helper will correct all of our mistakes and improve the way we do... everything. But that’s not the case (yet, anyway). We may be in an AI boom, but there is a lot of work to do before AI makes its way into our everyday lives in a meaningful, trustworthy way.
Here are some thoughts on the topic.
“To maximize its benefits and mitigate risks, the policies and regulations governing its use need to evolve to ensure safety, efficacy, and fairness in AI-powered medical care.”
Public Market Update: Average Sector Performance
Let’s first take a look at the HealthTech market.
April was not a happy month for HealthTech companies, as many ended it in the red. The Change Healthcare attack, for one, has consequences the industry is still grappling with. Geopolitical conflicts around the world may be affecting how investors handle their more diversified portfolios, and of course inflation doesn’t help.
Yet, regardless of market movements, not all news on the HealthTech front is bad. GLP-1 agonists are still making headlines, with the most recent news being that Hims & Hers will offer a compounded version of Ozempic. The long-term future of this move is not certain, but in the short term we are seeing a surge in pricing.
Going forward, other companies may also capitalize on GLP-1s in different ways, and of course AI is not to be forgotten. Its presence is here to stay, and its full impact on healthcare has yet to be seen.
In the meantime, remember the old adage: “buy low, sell high”!
Articles Worth Reading
Legal Implications of AI in Healthcare
AI is everyone’s favorite buzzword right now, but AI applications in healthcare are not to be taken lightly and raise several questions & points of consideration.
First, it’s clear that AI acting as an add-on, or assistant, is the far more likely scenario going forward (versus replacing professionals altogether). But that raises the question: when a mistake is made, who (or what) is liable?
Another point: if AI-based technology becomes the standard of care for a particular problem, then what checks & balances are required to ensure that the appropriate decisions are being made?
Liability for 👨⚕️ Doctors vs 🤖 Treatment Device companies that use AI vs 🏥 Hospitals will vary depending on how and where in the patient care sequence the technology is being used, and the type of error that has occurred.
AI is not perfect; in fact, it likes to hallucinate. In other words, depending on the quality of its inputs, AI may generate incorrect outputs.
Regulations for AI are necessary, and the industry’s big players have already begun to weigh in:
- The World Health Organization has proposed over 40 recommendations regarding the ethics & governance of AI models.
- The FDA is working to shape the regulatory landscape around the use of AI in medical devices.
- The American Medical Association has established policy recommendations on how to incorporate AI into healthcare settings.
For further information on the legal implications of AI in healthcare, read the full article below!
Opinion
A Doctor’s Perspective
In an era when even Walmart can’t fix healthcare (and gave up on it), it may be hard to understand how healthcare can be fixed at all.
Our problems are vast and multi-faceted. We are terrible at insurance coverage, billing, reimbursement, and access to care; prone to unnecessary testing; horrible at nutrition counseling and management; an abomination at primary and preventative healthcare compared to other developed countries… and the list goes on.
One player cannot fix our problems, let alone a corporation that doesn’t understand that patients are not objects that you can send through an assembly line and repair.
AI as an add-on makes sense to enhance patient care and to help our stressed-out, overworked, underpaid healthcare workers and our strained, bursting-at-the-seams hospital systems.
Yet, it’s also not a one-time fix, nor is it without problems. Regardless of AI’s promise, it is an area where we need to tread carefully. It’s only a matter of time before patients come to physicians with a printout of what ChatGPT told them their diagnosis is. If we aren’t careful, and if we don’t educate the public appropriately, we will just create another avenue for unnecessary testing, patient anxiety, and potential harm.
We need to alleviate our issues, not create more.
—Sanjana Vig, MD, MBA (Anesthesiologist, Perioperative Expert)