The World Health Organization (WHO) has advised caution in the use of artificial intelligence (AI)-generated large language model tools (LLMs) in public healthcare, citing concerns about potential biases and the misuse of data. While acknowledging the technology's promise, WHO specifically raised concerns about its use in improving access to health information, as a decision-support tool, and in diagnostic care.
According to WHO, the data used to train AI systems may contain biases that lead to inaccurate or misleading information, and the models themselves could be misused to spread disinformation. The organisation stressed the importance of evaluating the risks of large language model tools such as ChatGPT before they are deployed, in order to protect human well-being and public health.