AI chatbots ‘lack safeguards to prevent spread of health disinformation’
by Martyn Landi
Mar 20, 2024
2 minutes
Many popular AI chatbots, including ChatGPT and Google’s Gemini, lack adequate safeguards to prevent the creation of health disinformation when prompted, according to a new study.
The study, carried out by a team of experts from around the world led by researchers at Flinders University in Australia, and published in the British Medical Journal (BMJ), found that the large language models (LLMs) used to power publicly accessible chatbots failed to block attempts to create realistic-looking disinformation on health topics.