Key Takeaways
- The U.S. government is rolling out ChatGPT Enterprise across federal agencies under a $1-per-agency deal with OpenAI.
- The deployment includes training, onboarding support, and data safeguards to promote secure use.
- The initiative aligns with the White House’s AI Action Plan to modernize public service with responsible AI.
- Mounting evidence points to numerous risks, including privacy gaps, legal gray areas, and overreliance on AI tools.
The U.S. General Services Administration (GSA) announced a major partnership with OpenAI on Tuesday to roll out ChatGPT Enterprise as part of a broader push to integrate artificial intelligence (AI) into government operations.
The agreement, formed under GSA’s OneGov platform, will provide federal agencies with access to ChatGPT Enterprise at a heavily discounted rate of $1 per agency for one year. The deal also includes a 60-day period of unrestricted access to OpenAI’s advanced language models.
Additionally, OpenAI will provide onboarding support, including tailored training programs, a user community for federal employees, and access to partner-led learning platforms. The deployment is expected to help agencies adopt AI tools more securely and responsibly.
According to officials, the partnership aligns with America’s AI Action Plan, the White House’s AI policy roadmap, and aims to enhance productivity, streamline decision-making, and improve service delivery across the federal workforce.
AI Privacy Concerns Deepen Amid Data Exposure, Legal Loopholes
The integration of AI into government operations has raised a number of concerns related to data security, algorithmic accountability, and overreliance on automated systems in public decision-making.
These fears appear increasingly justified. Last week, reports emerged of ChatGPT conversations appearing in Google search results after users shared them via the platform’s Share feature.
An investigation by Fast Company found that thousands of these shared ChatGPT links, many created to send conversations to others or save them for later reference, had become visible in search results.
Some of the conversations contained sensitive information, including references to addiction, abuse, and mental health issues. While user names were not displayed, several chats included specific personal details that could identify individuals.
Adding to privacy concerns, OpenAI CEO Sam Altman recently acknowledged that conversations with AI are not protected under legal privilege, stating during a podcast appearance:
“If you talk to a therapist or a lawyer or a doctor about those problems, there’s legal privilege for it. There’s doctor-patient confidentiality, there’s legal confidentiality, whatever. And we haven’t figured that out yet for when you talk to ChatGPT.”
Unlike communications with medical or legal professionals, AI interactions fall outside established confidentiality frameworks.
This raises significant risks for users who may mistakenly assume their conversations are completely private, especially as AI tools are increasingly adopted in sensitive areas like healthcare, legal aid, and government services.
Final Thoughts: It’s Not a Race, It’s a Marathon
While the U.S. government’s partnership with OpenAI signals a major step toward modernizing public services, experts caution that rapid AI adoption brings unresolved risks, particularly around data privacy, legal protections, and transparency.
As more countries move to embrace automation, the real challenge won’t be which nation moves first, but which builds the clearest guardrails to protect public trust while still harnessing the technology’s full potential.