OpenAI Restructures Safety Research Following Leadership Departures
OpenAI is reorganizing how it approaches the safety of advanced AI systems after key figures who drove that work left the company. It has dissolved the dedicated ‘superalignment’ team it established less than a year ago to steer the development of highly capable AI toward beneficial outcomes.
The move follows the resignation of Ilya Sutskever, OpenAI co-founder and former chief scientist, alongside Jan Leike, the veteran researcher who led the superalignment work with him. Sutskever’s departure came after reported disagreements with CEO Sam Altman over the accelerating pace of AI development.
Leike’s exit likewise followed stated disagreements, and according to people familiar with the matter, Sutskever’s departure proved the final catalyst. In a statement, Leike said the now-defunct team had struggled to get the resources it needed.
Rather than maintain a separate unit, OpenAI told Bloomberg it will integrate safety work more deeply across its research efforts, saying the change will help it achieve its safety goals. The Information previously reported that other alignment researchers had also departed.
Going forward, co-founder John Schulman will lead AI alignment work, while Jakub Pachocki takes over Sutskever’s role as chief scientist. Still, the loss of figures like Sutskever, who pushed internally for safety, leaves open questions about how the company will ensure that increasingly powerful systems remain beneficial.
While the company maintains that continued leadership under Altman can advance its mission of benefiting humanity, others argue that dedicated focus and resources remain essential to navigating the complex challenges involved. The reorganization has renewed scrutiny of how OpenAI balances development speed against necessary precautions.