Generative artificial intelligence (AI) has the potential to revolutionize healthcare by enabling advancements in areas such as drug development and quicker diagnosis. However, the World Health Organization (WHO) has issued a warning about the potential pitfalls of rushing to embrace AI in healthcare. As the WHO examines the benefits and dangers of large multi-modal models (LMMs) in healthcare, it aims to provide guidance on the ethical and safe use of this technology.
In generative AI, algorithms are trained on data sets to produce new content. LMMs, a type of generative AI, can process multiple types of data input, including text, images, and video, and generate outputs that are not limited to the type of data fed into the algorithm. This ability to mimic human thinking and behavior and to engage in interactive problem-solving has significant implications for healthcare.
The WHO has identified five broad areas where LMMs could be applied in healthcare: diagnosis, where LMMs can respond to patients’ written queries; scientific research and drug development; medical and nursing education; clerical tasks; and patient-guided use, such as investigating symptoms. The potential benefits of LMMs across these areas could be substantial.
However, the WHO also highlights the risks associated with LMMs. There is a possibility that LMMs could produce false, inaccurate, biased, or incomplete outcomes. These models might be trained on poor-quality data or data containing biases related to race, ethnicity, ancestry, sex, gender identity, or age. The WHO warns that errors, misuse, and harm to individuals are inevitable as LMMs gain broader use in healthcare and medicine.
One concern is the potential for “automation bias,” where users blindly rely on the algorithm without critically evaluating its outputs. To address these risks, the WHO has issued recommendations on the ethics and governance of LMMs. It emphasizes the need for transparent information and policies to manage the design, development, and use of LMMs in healthcare.
The WHO also stresses the importance of liability rules to ensure that individuals harmed by an LMM are adequately compensated or have other forms of redress. It further highlights the role of tech giants in the development of LMMs. While AI has been used in public health and clinical medicine for over a decade, LMMs present new risks that societies and health systems may not be fully prepared to address. The significant resources required to develop LMMs mean they are often built by large tech firms, potentially entrenching those firms’ dominance in the healthcare sector.
To mitigate these risks, the WHO recommends that LMMs be developed collaboratively, involving not just scientists and engineers but also medical professionals and patients. Governments must ensure privacy when sensitive health information is used as training data and give individuals the option to opt out.
Furthermore, the WHO cautions that LMMs are vulnerable to cybersecurity risks that could compromise patient information and the trustworthiness of healthcare provision. It is crucial to address these risks and ensure the safe and ethical use of LMMs in healthcare settings.
In conclusion, generative AI, specifically in the form of LMMs, holds great potential for transforming healthcare. However, it is essential to approach implementation with caution and address the associated risks. The WHO’s recommendations on the ethics and governance of LMMs provide valuable guidance for governments, tech firms, and healthcare providers. By proactively managing these risks and ensuring transparency, privacy, and inclusivity, these stakeholders can realize the full benefits of generative AI in healthcare while minimizing its pitfalls.
Source: The Manila Times