Dane Stuckey, the former CISO of data analytics firm Palantir, has joined OpenAI as its newest CISO, serving alongside OpenAI head of security Matt Knight.
Stuckey announced the move in a post on X Tuesday evening.
“Security is critical to OpenAI’s mission,” he said. “It’s essential we meet the highest standards for compliance, trust, and security to protect hundreds of millions of users of our products, enable democratic institutions to maximally benefit from these technologies, and drive the development of safe AGI for the world. I’m so excited for this next chapter, and can’t wait to help secure a future where AI benefits us all.”
Stuckey started at Palantir in 2014 on the information security team as a detection engineering and incident response lead. Prior to joining Palantir, he spent more than a decade in commercial, government, and intelligence community roles spanning digital forensics, incident detection and response, and security program development, according to his blog.
Stuckey’s work at Palantir, an AI company flush with government contracts, could well help advance OpenAI’s ambitions in this area. Forbes reports that, through its partner Carahsoft, a government contractor, OpenAI is seeking to establish a closer relationship with the U.S. Department of Defense.
Since lifting its ban on selling AI tech to the military in January, OpenAI has worked with the Pentagon on several software projects, including ones related to cybersecurity. It has also appointed the former head of the National Security Agency, retired Gen. Paul Nakasone, to its board.
OpenAI has been beefing up the security side of its operations in recent months.
A few weeks ago, the company posted a job listing for a head of trusted compute and cryptography to lead a new team focused on building “secure AI infrastructure.” That infrastructure would entail capabilities to protect AI tech, security tool evaluations, and access controls “that advance AI security,” per the description.