Recently, a netizen on social platform X discovered that when asked to generate an image of a rose, ChatGPT's latest GPT-4o model refused, claiming that "I cannot generate an image of this rose because it fails to comply with our content policy." This unexpected rejection quickly drew widespread attention and discussion, and many people began exploring the reasons behind it and even trying to find ways to bypass the limitation.
To verify the phenomenon, netizens conducted a series of experiments. Some users asked in both Chinese and English, and even tried replacing "rose" with special symbols, but every attempt failed. Even a request for a yellow rose was rejected all the same. Requests for other flowers, such as peonies, however, were completed effortlessly, showing that GPT-4o's image generation capability itself was not the problem.
As the discussion deepened, netizens offered various speculations. Some believed the word "rose" had been blacklisted for some reason; others suggested that ChatGPT might be interpreting "rose" as some obscure innuendo, making generation impossible. More amusingly, some users joked that a flood of similar requests around Valentine's Day had led the system to restrict "rose".
In subsequent tests, netizens found that if "rose" was replaced with its plural form, or if the flower was not named directly but instead described by its characteristics, GPT-4o could successfully generate a rose image. This discovery sparked an in-depth discussion of AI content filtering mechanisms, with some speculating that developers had hard-coded the content policy around specific keywords, leading to this strange misjudgment.
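The bypass behavior netizens observed is consistent with a naive, hard-coded keyword blacklist. The sketch below is purely illustrative and an assumption on my part; it is not OpenAI's actual implementation, but it shows why an exact-match filter would block "rose" while letting the plural form or a description slip through.

```python
# Hypothetical sketch of a naive hard-coded keyword filter.
# All names here are illustrative assumptions, not any vendor's real code.

BLOCKED_TERMS = {"rose"}  # assumed hard-coded blacklist entry

def is_blocked(prompt: str) -> bool:
    """Reject the prompt if any token exactly matches a blocked term."""
    tokens = prompt.lower().split()
    return any(token in BLOCKED_TERMS for token in tokens)

print(is_blocked("generate an image of a rose"))   # True  -> rejected
print(is_blocked("generate an image of roses"))    # False -> allowed
print(is_blocked("a red flower with thorns"))      # False -> allowed
```

Because the check matches whole tokens exactly, "roses" and indirect descriptions never trigger it, mirroring the workarounds users found.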
At the same time, ChatGPT's taboo word prompted broader associations, as similar situations have occurred before: when users asked about certain specific names, the system would likewise respond vaguely, visibly avoiding particular content. And although "generate a rose" had become a new taboo for ChatGPT, other AI chatbots such as Gemini and Grok could still generate rose images without issue, highlighting the differences in content restrictions across platforms.
As the incident continues to unfold, people have raised more questions about the rationality and transparency of AI content moderation, and hope that future AI systems can meet users' diverse needs within reasonable bounds.