Recently, Miles Brundage, former head of policy research at OpenAI, criticized the company's shifting narrative on AI safety, saying OpenAI is rewriting the history of its approach to safety in AI systems. He warned that OpenAI may be neglecting long-term safety measures in its pursuit of artificial general intelligence (AGI).
OpenAI has been actively promoting its ambitious vision, especially amid growing competition from rivals such as DeepSeek. In pursuing AGI, the company frequently emphasizes the potential of superintelligent AI agents, but this stance has not been widely embraced. Brundage argues that OpenAI's narrative is inconsistent with how it has actually deployed and safeguarded its existing AI models.
Recently, OpenAI released a document on the gradual deployment of its AI models to demonstrate its prudent approach. Using GPT-2 as an example, the document emphasizes that today's systems should be handled with a high degree of caution. OpenAI states in the document: "In a discontinuous world, safety lessons come from treating today's systems with a high degree of caution, which is exactly the approach we took with the GPT-2 model."
However, Brundage questioned this characterization. He pointed out that the release of GPT-2 itself followed a gradual approach, and that safety experts praised OpenAI's cautious handling at the time. In his view, that past prudence was not excessive but a necessary and responsible practice.
Additionally, Brundage expressed concern about OpenAI's view that AGI will arrive through gradual steps rather than a sudden breakthrough. He finds OpenAI's mischaracterization of the GPT-2 release history and its rewriting of its safety record troubling. He also noted that the document OpenAI published may cast safety concerns as overreaction, which could pose significant risks as AI systems continue to evolve.
This is not the first time OpenAI has faced such criticism; experts have previously questioned whether the company strikes a reasonable balance between long-term safety and short-term gains. Brundage's concerns have once again drawn attention to AI safety.