Artificial Intelligence in Healthcare

The applications of artificial intelligence ("AI") technology are numerous, and many industries, including healthcare, have benefited from AI advances. From deep learning algorithms that can predict and diagnose diseases, to natural language processing that can distill voluminous unstructured health records and drug safety data into comprehensible form, the potential for AI in healthcare seems endless. However, every coin has two sides, and AI is no exception.

Data Privacy and Security

AI technology is increasingly used in the healthcare sector to process massive datasets. Such datasets will inevitably include personal health information, which is generally regarded as sensitive and subject to a higher degree of protection. Fortunately, general public awareness of privacy and data protection issues is increasing. As the use of personal information becomes more regulated worldwide, stakeholders must give careful thought to their data handling and security practices, lest they incur the financial and reputational repercussions of a data breach.

Liability

When a doctor makes a mistake, the doctor may be subject to a medical malpractice lawsuit; but when equipment powered by AI makes a mistake, who should be held accountable?

Most applications of AI in the healthcare industry today require input from physicians, who remain the main drivers of decision-making. As AI continues to evolve and becomes more independent in performing tasks, or as physicians become more reliant on its recommendations, there will be increased legal scrutiny of physicians and healthcare institutions that incorporate such AI technologies into their practices. Where there is use of AI, there is potential for liability.

In addition, as AI technologies continue to advance, it is becoming increasingly difficult for humans to trace their decision-making processes. This may be particularly true for AI based on deep learning, where the black-box nature of the underlying models makes it very difficult to determine the rationale behind a decision. In such instances, it may be difficult to assign responsibility to an individual when something goes wrong.

The Gray Zone

There is currently limited legal precedent regarding liability involving AI. Should a physician and/or healthcare institution be liable for errors or malfunctions of AI technology, or should the developer and/or vendor of that technology bear responsibility for its errors?

Privacy regulators in Canada recognize the benefits of AI as well as the liability and privacy challenges that its adoption raises. Several projects funded by Canadian privacy regulators, such as "Privacy and Artificial Intelligence: Protecting Health Information in a New Era," "Artificial Intelligence, Machine Learning and Privacy: From Threats to Solutions," and "Deep Learning in Medical Imaging: Risks to Patient Privacy and Possible Solutions," have been completed, and more are likely underway to help regulators and legislators understand how best to address the liability and privacy risks arising from the application of AI in healthcare and other industries.

As AI continues to develop and its applications across industries expand, novel legal questions will arise. To maximize the benefits of AI technologies, industry leaders and regulators will need to continually monitor the development and impact of AI in their industries, weigh the benefits and risks of adoption, and provide guidance to other industry professionals and the public on appropriate uses and applications of AI.