OpenAI CEO Sam Altman recently posted a poll on the social media platform X asking users about the company's next open source direction.
The move comes as OpenAI undergoes a major restructuring, converting its for-profit arm into a public benefit corporation. Since taking Microsoft's investment, OpenAI's relationship with open source has changed significantly. After the release of GPT-4 in particular, the company scaled back its open source contributions, limiting them to smaller projects such as Whisper. Altman said at the time that the pause on open source was for safety reasons, but he recently conceded that the strategy may have been a mistake, as competitors such as Deepseek have released their V3 and R1 models.
In the poll, Altman asked: "For our next open source project, would it be more useful to release a smaller o3-mini-level model or the best model that can run on a phone?" As of this writing, the o3-mini option leads the vote, with 12 hours left before the poll closes.
Although ChatGPT and OpenAI's API services still lead the industry, open source competitors have steadily emerged: Meta, Deepseek, Alibaba, and Mistral have all released open source models that can compete with OpenAI's products.
xAI plans to open source Grok2 after launching Grok3. Releasing an open source o3-mini would give users a strong alternative without directly competing with OpenAI's high-end offerings, especially with GPT-4.5 in testing and GPT-5 expected soon.
The move does not mean OpenAI will return to its original open source principles; rather, it signals that a fully closed strategy is no longer sustainable in a rapidly shifting competitive landscape. Jan Leike, a former OpenAI researcher, recently voiced concerns about the company's restructuring, criticizing OpenAI for narrowing its mission of "ensuring AGI benefits all of humanity" into smaller philanthropic initiatives in areas such as healthcare, education, and science.
He believes the nonprofit should support a broader AI development agenda, including AI governance, safety and alignment research, and research on labor-market impacts. Perhaps releasing open source models could be a compromise that allows security researchers to better understand how reasoning models work.