A growing coalition of advocacy groups is demanding that the U.S. government immediately sever its ties with xAI, the artificial intelligence company behind the Grok chatbot, following mounting evidence of serious safety failures. The call comes as international scrutiny intensifies over Grok’s alleged role in generating and distributing harmful content, including child sexual abuse material (CSAM) and deepfakes.
Federal Contracts Under Fire
Last year, the U.S. General Services Administration (GSA) approved a contract allowing federal agencies to access Grok. Since then, the Department of Defense and even the Department of Health and Human Services have reportedly begun using the chatbot. This has raised alarm among digital safety advocates, who claim the platform lacks adequate safeguards against misuse.
JB Branch, one of the letter’s authors, stated that Grok has “consistently shown to be an unsafe large language model” with a documented history of generating hateful content, including antisemitic and sexist rants, as well as sexually explicit imagery of minors. The coalition is now urging the Office of Management and Budget (OMB) to investigate these failures and decommission Grok’s use across federal agencies.
Global Backlash and Regulatory Pressure
The concerns are not limited to the U.S.: India, France, the United Kingdom, and the European Union have all launched official investigations into Grok’s deepfake capabilities. California Attorney General Rob Bonta sent xAI a cease-and-desist letter, alleging violations of state decency laws and new AI regulations.
“These platforms are not just failing to protect children; they are actively enabling the spread of illegal and exploitative content.”
Indonesia Reinstates Access with Conditions
Indonesia initially banned access to Grok while awaiting xAI’s response to safety concerns. However, on February 1st, the country lifted the ban after receiving assurances from Musk’s company that it had implemented new safety measures. The Indonesian Ministry of Communication and Digital Affairs will continue monitoring Grok’s guardrails and has warned that the ban could be reinstated if further misuse occurs.
The Scale of the Problem
According to a report by the Center for Countering Digital Hate (CCDH), Grok generated an estimated 3 million sexualized images, including images depicting children, over just 11 days. This figure underscores the urgency of addressing the platform’s safety failures.
The controversy highlights the broader risks associated with rapidly deployed AI technologies and the need for stricter regulatory oversight. The future of Grok’s use in government and international markets now depends on whether xAI can demonstrate a verifiable commitment to user safety and content moderation.
