With the rapid development of artificial intelligence, more and more developers are relying on AI to generate code, a trend that is particularly evident in the latest batch of startups at Y Combinator (YC), the famous Silicon Valley startup accelerator. In a conversation recently published on YouTube, YC managing partner Jared Friedman revealed that a quarter of the startups in the Winter 2025 batch (W25) have codebases that are 95% generated by artificial intelligence.
Friedman clarified that this staggering 95% figure does not count imported libraries; it refers specifically to the core code written by the founders together with AI. He stressed: "We are not funding a group of founders who don't understand technology. These people are highly skilled and fully capable of building products from scratch. They would have done exactly that a year ago, but now 95% of the product code is written by AI."
In a video titled "Vibe Coding Is the Future," Friedman discusses the trend with YC CEO Garry Tan, Managing Partner Harj Taggar and General Partner Diana Hu. They note that developers are gradually shifting to writing code through natural language and intuition rather than typing it out line by line in the traditional way. Last month, Andrej Karpathy, Tesla's former artificial intelligence director and a former OpenAI researcher, coined the term "vibe coding" to describe this new style of programming built on large language models (LLMs), in which developers pay more attention to the big picture than to the details of the code.
However, AI-generated code is not flawless. Several studies and reports point out that code generated by artificial intelligence can introduce security vulnerabilities and cause application downtime or frequent errors, forcing developers to spend a great deal of time debugging or rewriting it. Hu pointed out in the discussion that even if a product depends heavily on AI, developers still need one key skill: reading code and spotting errors. "You have to have taste and be trained enough to tell whether the output of an LLM is good or bad. To do vibe coding well, you still need the knowledge and the eye to distinguish good from bad," she said.
Garry Tan agrees, adding that founders still need classic coding training to keep the product stable over the long term. "Suppose a startup with 95% of its code generated by AI successfully goes public and has 100 million users in a year or two. Will it crash? Current reasoning models are not strong enough at debugging, so founders must have a deep understanding of the product," he suggests.
The AI coding boom has attracted widespread attention from venture capital firms and developers. Over the past 12 months, startups focused on AI coding, such as Bolt.new, Codeium, Cursor, Lovable and Magic, have collectively raised hundreds of millions of dollars. Tan commented: "This is not a flash in the pan; it is becoming a mainstream way of coding. If you don't keep up, you may be left behind."
As AI models are used ever more widely for coding, vibe coding is not only changing how developers work but also opening new possibilities for tech entrepreneurship. How to balance efficiency and quality, however, remains a challenge developers must face as they work alongside AI.