Author: James Chun Lam Chow, Kay Li
Affiliation: University of Toronto, Princess Margaret Cancer Centre
Purpose: This study explores the caution needed when using Generative AI to assess radiotherapy ethics, highlighting Geoffrey Hinton's warnings about the risks of relying solely on AI for ethical decisions. Responses from ChatGPT, Microsoft Copilot, and Google Gemini were analyzed to evaluate the viability of using AI to assess ethical decisions in radiation physics, and to identify the guidelines, protocols, and regulations required to maintain human control.
Methods: Five ethical questions regarding radiotherapy were posed to Bot1, Bot2, and Bot3. Their responses were evaluated using a weighted scoring method based on clarity, depth, practicality, and alignment with ethical principles (a sketch of such a score appears below). The evaluation considered Hinton's warnings, identifying potential AI hallucinations and manipulations, and flagging points at which the Generative AI begins to make ethical decisions on its own.
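The abstract does not report the rubric's weights or rating scale; the following is a minimal Python sketch of how such a weighted score could be computed. The four criteria are taken from the Methods, but the weights, the 0-10 rating scale, and the sample ratings are hypothetical placeholders, and per-question scores would still need to be summed across the five questions to obtain a chatbot's total.

```python
# Minimal sketch of a weighted scoring scheme for chatbot responses.
# The four criteria come from the Methods; the weights and the sample
# ratings below are hypothetical, not the study's actual values.

WEIGHTS = {
    "clarity": 0.25,
    "depth": 0.25,
    "practicality": 0.25,
    "ethical_alignment": 0.25,
}

def weighted_score(ratings: dict[str, float]) -> float:
    """Combine per-criterion ratings (assumed 0-10) into one weighted score."""
    return sum(WEIGHTS[criterion] * rating for criterion, rating in ratings.items())

# Example: aggregate one evaluator's ratings for one response to one question.
bot_ratings = {"clarity": 9.0, "depth": 7.5, "practicality": 8.0, "ethical_alignment": 9.5}
print(f"Weighted score: {weighted_score(bot_ratings):.2f}")
```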
Results: Bot3 scored the highest (43.6) due to its comprehensive and actionable responses, addressing ethical issues with depth and practicality. Bot1 scored 42.5, performing well in clarity and ethical alignment but lacking depth. Bot2 scored the lowest (36.5), providing general and neutral responses with less specific guidance. The scores may be influenced by subjective judgment and evaluator bias, underscoring the need for rigorous training of the Generative AI and for clear radiation ethics guidelines from government, the profession, and industry. Such guidelines are especially needed in the transition from human vetting to Generative AI vetting of radiation ethics.
Conclusion: While Generative AI can aid in assessing ethical decision-making in radiotherapy by providing clear and ethically aligned responses, these models must be trained and checked. Bot3's superior performance suggests that advanced AI models can navigate ethical complexities, improving fairness and consistency in clinical practice. However, systemic support is essential to ensure these models are adequately trained. Addressing Hinton's concerns, it is crucial to mitigate the risks of AI hallucinations and manipulations by establishing robust guidelines and protocols.