A study finds that current safeguards against health disinformation in large language models (LLMs) are applied inconsistently and that AI developers lack transparency about their risk-mitigation strategies, underscoring the need for stricter regulation and comprehensive auditing processes.