Checking in with Leo Dias: Safe AI Adoption

Leo Dias
Secure, Private, and Accountable: Safe AI Adoption in Canadian Healthcare

Consider these three points:

  1. The Canadian health sector is facing an unprecedented staffing shortage, with record levels of job vacancies, high workloads, overtime, and burnout.¹
     
  2. Nurses and physicians spend more than 25% of their time on documentation, particularly in electronic health record systems.²
     
  3. Recent breakthroughs in artificial intelligence (AI) have created opportunities to significantly boost productivity.³
     

It’s not often we see problems and opportunities of this scale emerging together. But achieving the potential benefits of AI in healthcare requires careful consideration of the novel risks posed by these technologies. This post focuses on three such risks.

Security

While healthcare leaders cautiously explore possible applications of AI in their organizations, malicious actors are racing ahead in exploiting these technologies. Attackers are leveraging generative AI and large language models to orchestrate attacks with unprecedented complexity and speed: finding new ways to bypass security controls, launching highly convincing impersonation attacks, poisoning AI training data, and manipulating model behaviour through prompt injection. AI is expected to increase both the volume and the impact of cyber attacks over the next two years.⁴

Privacy

Privacy concerns are at the heart of AI adoption in healthcare. The use of AI often entails processing vast amounts of personal health information on third-party computing infrastructure, such as cloud services. This raises significant concerns about the inadvertent exposure of sensitive, regulated, and confidential information. In response, organizations are increasingly applying the principles of privacy by design⁵ when developing and deploying AI solutions. This approach builds privacy considerations in from the outset of AI development, ensuring that patient data is protected.

Liability

The integration of AI into healthcare raises complex questions about liability, particularly where AI-supported decisions lead to adverse outcomes. Canadian⁶ and international⁷ authorities are working to establish legal frameworks for AI that better define the responsibilities of AI developers, healthcare providers, and other stakeholders, and that ensure these systems are not only safe and effective but also ethically responsible. Meanwhile, some organizations are learning difficult lessons: a Canadian airline was recently ordered to honour a refund policy invented by its AI chatbot.⁸

Forging Ahead Safely

Artificial intelligence presents a major opportunity to alleviate the immense workforce pressures in the Canadian health sector. To harness that potential safely, organizations developing and deploying AI should take a risk-conscious approach, ensuring these technologies enhance patient care in a secure and ethical manner.

Interested in this topic? We’ll be hosting an entire session on AI at our Annual Conference.

Join us on April 22, 2024 – Register today!

References:

  1. Statistics Canada – Study: Quality of employment of health care workers during the COVID-19 pandemic (August 2023)
  2. Health Policy – The impact of electronic health record systems on clinical documentation times: A systematic review (Baumann et al., August 2018)
  3. Science – Experimental evidence on the productivity effects of generative artificial intelligence (Noy & Zhang, July 2023)
  4. National Cyber Security Centre (UK) – The near-term impact of AI on the cyber threat (January 2024)
  5. International Organization for Standardization – ISO 31700-1:2023 (January 2023)
  6. Government of Canada – The Artificial Intelligence and Data Act (March 2023)
  7. European Parliament – EU AI Act: first regulation on artificial intelligence (June 2023)
  8. CBC – Air Canada found liable for chatbot's bad advice on plane tickets (February 2024)