
This week, Jan Leike, co-head of OpenAI's "superalignment" team (which oversees the company's safety work), resigned. In a thread on X (formerly Twitter), the safety leader explained why he left OpenAI: he had disagreed with the company's leadership about its "core priorities" for "quite some time," so long that the disagreement reached a "breaking point."
The next day, OpenAI's CEO Sam Altman and president and co-founder Greg Brockman responded to Leike's claims that the company isn't focusing on safety.
Among other points, Leike had said that OpenAI's "safety culture and processes have taken a backseat to shiny products" in recent years, and that his team struggled to obtain the resources to get their safety work done.
"We are long overdue in getting incredibly serious about the implications of AGI [artificial general intelligence]," Leike wrote. "We must prioritize preparing for them as best we can."
Altman first responded with a repost of Leike's thread on Friday, saying that Leike is right that OpenAI has "a lot more to do" and that the company is "committed to doing it." He promised a longer post was coming.
On Saturday, Brockman posted a shared response from both himself and Altman on X:
After expressing gratitude for Leike's work, Brockman and Altman said they've received questions following the resignation. They shared three points, the first being that OpenAI has raised awareness about AGI "so that the world can better prepare for it."
"We've repeatedly demonstrated the incredible possibilities from scaling up deep learning and analyzed their implications; called for international governance of AGI before such calls were popular; and helped pioneer the science of assessing AI systems for catastrophic risks," they wrote.
The second point is that they're building foundations for the safe deployment of these technologies, and they cited the work employees have done to "bring GPT-4 to the world in a safe way" as an example. The two claimed that since GPT-4's release in March 2023, the company has "continuously improved model behavior and abuse monitoring in response to lessons learned from deployment."
The third point? "The future is going to be harder than the past," they wrote. OpenAI needs to keep elevating its safety work as it releases new models, Brockman and Altman explained, citing the company's Preparedness Framework as a way to help do that. According to its page on OpenAI's site, the framework is meant to anticipate "catastrophic risks" that could arise and to mitigate them.
Brockman and Altman then discussed the future, in which OpenAI's models are more integrated into the world and more people interact with them. They see this as a beneficial thing and believe it's possible to do safely, "but it's going to take an enormous amount of foundational work." Because of this, the company may delay release timelines so models "reach [its] safety bar."
"We know we can't imagine every possible future scenario," they said. "So we need to have a very tight feedback loop, rigorous testing, careful consideration at every step, world-class security, and harmony of safety and capabilities."
The leaders said OpenAI will keep researching and working with governments and stakeholders on safety.
"There's no proven playbook for how to navigate the path to AGI. We think that empirical understanding can help inform the way forward," they concluded. "We believe both in delivering on the tremendous upside and working to mitigate the serious risks; we take our role here very seriously and carefully weigh feedback on our actions."
Leike's resignation and pointed words carry extra weight given that OpenAI's chief scientist, Ilya Sutskever, resigned this week as well. "#WhatDidIlyaSee" became a trending topic on X, signaling speculation over what top leaders at OpenAI are privy to. Judging by the negative reaction to today's statement from Brockman and Altman, it didn't dispel any of that speculation.
As of now, the company is charging ahead with its next release: GPT-4o, a new model with voice assistant capabilities.