Robert F. Kennedy Jr.’s ‘Make America Healthy Again’ (MAHA) report was launched as a national initiative aimed at improving public health in the United States. The report, produced under Kennedy’s leadership, has drawn backlash for including fabricated citations and signs of AI-generated content. The incident has raised concerns about misinformation, accountability, and the misuse of generative AI in policymaking.
In this article, we uncover what went wrong with the RFK Jr. MAHA health report, why AI slop is dangerous, and what this means for the future of public health documentation.
What Is the MAHA Health Report?
The MAHA health report is a federal initiative commissioned by RFK Jr. as part of his vision to improve American health outcomes, especially among children. It aimed to highlight chronic disease causes and recommend holistic health policies.
Key Objectives:
- Investigate rising rates of chronic illness in children.
- Recommend changes to vaccine schedules.
- Promote alternative and holistic health approaches.
However, the credibility of this report has come into question due to questionable sourcing.
What Is “AI Slop” in the Context of MAHA?
“AI slop” refers to poorly sourced, AI-generated content filled with inaccuracies, hallucinated facts, and unreliable citations. In the case of the MAHA report:
- Fake Citations: At least 7 sources cited in the report didn’t exist.
- AI Markers Detected: Footnote URLs contained “oaicite”, a marker associated with citations generated by ChatGPT or similar OpenAI tools.
- Recycled Sources: Over 37 citations were duplicated.
- Incorrect Details: Author names, journal titles, and volume numbers were wrong.
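Two of the markers listed above are mechanically detectable. The sketch below is a hypothetical illustration (not the method any investigator actually used) of how a reviewer might scan a report’s text for “oaicite” fragments and duplicated citations; the bracketed-citation pattern is a simplification for demonstration purposes.

```python
import re
from collections import Counter

def flag_ai_slop(text: str) -> dict:
    """Flag two markers from the MAHA controversy: 'oaicite' URL
    fragments and duplicated citations. Illustrative only."""
    # "oaicite" fragments appear in links produced by some OpenAI tools.
    oaicite_hits = re.findall(r"\S*oaicite\S*", text)

    # Naive citation extractor: anything in square brackets, e.g. [Smith 2021].
    citations = re.findall(r"\[[^\]]+\]", text)
    duplicates = [c for c, n in Counter(citations).items() if n > 1]

    return {"oaicite_hits": oaicite_hits, "duplicates": duplicates}

# Hypothetical sample text containing both markers.
sample = (
    "Findings [Smith 2021] align with earlier work [Smith 2021]. "
    "See https://example.com/#:~:text=oaicite for details."
)
report = flag_ai_slop(sample)
```

A check like this catches only surface-level artifacts; verifying that a cited study actually exists still requires looking each reference up by hand or against a bibliographic database.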
This kind of AI misuse undermines trust in government-backed science.
RFK Jr.’s Response and White House Statement
When questioned, White House Press Secretary Karoline Leavitt claimed these were just “formatting issues” and not deliberate errors. She stated the report was based on “good science not previously recognised by the federal government.”
Despite these claims, experts argue that the signs of AI involvement are hard to ignore and could mislead public policy.
Why AI Slop in Health Policy Is Dangerous
Using AI without human oversight in public health can lead to serious consequences:
- Public Misinformation: Fake studies can shape harmful policies.
- Reduced Trust: Health institutions may lose public confidence.
- Legal Risks: Reliance on fabricated references can lead to lawsuits.
The problem isn’t AI itself; it’s how it’s used. Proper vetting and transparency are critical.
Generative AI in Government Reports: A Double-Edged Sword
The MAHA controversy highlights a growing issue: the use of generative AI in government. These tools can indeed help streamline data analysis and drafting, but they are not foolproof. Errors like ChatGPT citation hallucinations show how quickly misinformation can enter official documents.
Governments must establish strict guidelines for AI-generated content. Without human verification, the risk of publishing inaccurate or misleading information rises significantly. If left unaddressed, the MAHA report sets a troubling precedent.
Health Policy Transparency Demands Accountability
Using fake citations in a public health report highlights a major failure in maintaining transparency and credibility. When institutions publish policy documents, they carry the weight of public trust. Errors like AI-generated misinformation and ChatGPT citation errors can damage credibility beyond repair.
Moving forward, federal reports must adopt a zero-tolerance policy for unverifiable sources. Editorial review, transparent sourcing, and AI disclosure are essential to maintain health policy transparency.
Frequently Asked Questions
Was ChatGPT used to write the MAHA report?
Evidence like “oaicite” in the citations strongly suggests that ChatGPT or similar AI was used.
How many fake citations were found in the RFK Jr. MAHA health report?
At least 7 fully fabricated studies and more than 37 recycled citations were identified.
What is RFK Jr.’s stance on AI?
RFK Jr. supports the use of AI in health data management but has not directly addressed its misuse in this report.
Is the MAHA report still being used?
Although some fake citations have been replaced, the main recommendations remain unchanged.
Final Thoughts
This situation is a cautionary tale for any institution using AI for official documentation. Transparency, human oversight, and rigorous fact-checking remain non-negotiable, especially when public health is on the line.
Stay tuned for more insights on how AI is shaping (and sometimes shaking) our institutions.