OpenAI CEO Sam Altman says that OpenAI is working with the U.S. AI Safety Institute, a federal government body that aims to assess and address risks in AI platforms, on an agreement to provide early access to its next major generative AI model for safety testing.
The announcement, which Altman made in a post on X late Thursday evening, was light on details. But it, together with a similar deal struck in June with the U.K.'s AI safety body, appears to be meant to counter the narrative that OpenAI has deprioritized work on AI safety in the pursuit of more capable, more powerful generative AI technologies.
In May, OpenAI effectively disbanded a unit working on the problem of developing controls to prevent "superintelligent" AI systems from going rogue. Reporting, including ours, suggested that OpenAI cast aside the team's safety research in favor of launching new products, ultimately leading to the resignation of the team's two co-leads, Jan Leike (who now leads safety research at AI startup Anthropic) and OpenAI co-founder Ilya Sutskever (who started his own safety-focused AI company, Safe Superintelligence Inc.).
In response to a growing chorus of critics, OpenAI said it would eliminate the restrictive non-disparagement clauses that implicitly discouraged whistleblowing and create a safety commission, as well as dedicate 20% of its compute to safety research. (The disbanded safety team had been promised 20% of OpenAI's compute for its work, but ultimately never received it.) Altman re-committed to the 20% pledge and re-affirmed that OpenAI voided the non-disparagement terms for new and current staff in May.
The moves did little to placate some observers, however, particularly after OpenAI staffed the safety commission entirely with company insiders, including Altman, and, more recently, reassigned a top AI safety executive to another org.
Five senators, including Brian Schatz, a Democrat from Hawaii, raised questions about OpenAI's policies in a recent letter addressed to Altman. OpenAI chief strategy officer Jason Kwon responded to the letter today, writing that OpenAI "[is] dedicated to implementing rigorous safety protocols at every stage of our process."
The timing of OpenAI's agreement with the U.S. AI Safety Institute seems a tad suspect in light of the company's endorsement earlier this week of the Future of Innovation Act, a proposed Senate bill that would authorize the Safety Institute as an executive body that sets standards and guidelines for AI models. The moves together could be perceived as an attempt at regulatory capture, or at the very least an exertion of influence from OpenAI over AI policymaking at the federal level.
Not for nothing, Altman sits on the U.S. Department of Homeland Security's Artificial Intelligence Safety and Security Board, which provides recommendations for the "safe and secure development and deployment of AI" throughout the U.S.'s critical infrastructure. And OpenAI has dramatically increased its expenditures on federal lobbying this year, spending $800,000 in the first six months of 2024 versus $260,000 in all of 2023.
The U.S. AI Safety Institute, housed within the Commerce Department's National Institute of Standards and Technology, consults with a consortium of companies that includes Anthropic, as well as big tech firms like Google, Microsoft, Meta, Apple, Amazon and Nvidia. The industry group is tasked with working on actions outlined in President Joe Biden's October AI executive order, including developing guidelines for AI red-teaming, capability evaluations, risk management, safety and security, and watermarking synthetic content.