Navigating Radiotherapy Ethics with Generative AI: Geoffrey Hinton's Warnings Regarding Relying Totally on Insights from AI Models

Authors: James Chun Lam Chow, Kay Li

Affiliations: University of Toronto; Princess Margaret Cancer Centre

Abstract:

Purpose: This study explores the caution needed when using Generative AI for assessing radiotherapy ethics, highlighting Geoffrey Hinton’s warnings about the risks of relying solely on AI for ethical decisions. Responses from ChatGPT, Microsoft Copilot, and Google Gemini were analyzed to evaluate the viability of using AI to assess ethical decisions for radiation physics, and the necessary guidelines, protocols, and regulations required to maintain human control.
Methods: Five ethical questions regarding radiotherapy were posed to Bot1, Bot2, and Bot3. Their responses were evaluated using a weighted scoring method based on clarity, depth, practicality, and alignment with ethical principles. The evaluation took Hinton's warnings into account, identifying potential AI hallucinations and manipulations, as well as instances where the Generative AI begins to make its own ethical decisions.
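The weighted scoring method described above can be sketched as follows. The criterion weights and the 0-10 rating scale here are illustrative assumptions only, since the abstract does not specify the actual weights or scales used in the study:

```python
# Minimal sketch of a weighted scoring method for evaluating chatbot
# responses. The weights and rating scale are hypothetical; the study's
# actual values are not given in the abstract.

def weighted_score(ratings, weights):
    """Combine per-criterion ratings into a single weighted score."""
    return sum(ratings[c] * weights[c] for c in weights)

# Hypothetical weights for the four criteria named in the Methods.
weights = {"clarity": 1.0, "depth": 1.5, "practicality": 1.5, "ethical_alignment": 1.0}

# Hypothetical 0-10 ratings for one bot's response to one question.
ratings = {"clarity": 9, "depth": 8, "practicality": 7, "ethical_alignment": 9}

print(weighted_score(ratings, weights))  # 9 + 12 + 10.5 + 9 = 40.5
```

Scores for each bot would then be aggregated across the five questions, allowing evaluators to weight criteria such as depth and practicality more heavily than others.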
Results: Bot3 scored the highest (43.6) due to its comprehensive and actionable responses, addressing ethical issues with depth and practicality. Bot1 scored 42.5, performing well in clarity and ethical alignment but lacking depth. Bot2 scored the lowest (36.5), providing general and neutral responses with less specific guidance. The scores may be influenced by subjective scoring and evaluator bias, highlighting the need for rigorous training of the Generative AI and the development of clear radiation ethics guidelines by governments, professional bodies, and industry. Such guidelines are especially needed in the transition from human vetting to Generative AI vetting of radiation ethics.
Conclusion: While Generative AI can aid ethical decision-making in radiotherapy by providing clear and ethically aligned responses, these models must be properly trained and verified. Bot3's superior performance suggests that advanced AI models can navigate ethical complexities, improving fairness and consistency in clinical practice. However, systemic support is essential to ensure these models are adequately trained. In line with Hinton's concerns, it is crucial to mitigate the risks of AI hallucinations and manipulations by establishing robust guidelines and protocols.
