The U.S. Federal Trade Commission (FTC) recently announced that it has referred its complaint against Snap to the U.S. Department of Justice (DOJ). The FTC said that Snap's artificial intelligence chatbot, My AI, may pose "risks and harms" to young users, and that it has reason to believe Snap is violating or about to violate the law.
Snap's app is popular among young people, particularly teenagers. Like many social media platforms, Snap has introduced an AI chatbot to deepen user engagement. The FTC, however, has expressed serious concerns about the feature, arguing that its potential negative effects cannot be ignored. The Commission believes the chatbot may inadvertently serve inappropriate or misleading content to users, endangering their safety and mental health.
The FTC's statement emphasized the importance of protecting young users. Because adolescents are at a vulnerable stage of psychological and social development, the Commission called on Snap to subject its AI chatbot's capabilities to stricter scrutiny and oversight. The FTC believes Snap's practices may violate consumer protection law and therefore decided to refer the case to the DOJ for further investigation and potential legal action.
The referral has attracted widespread attention: as social media's reach continues to grow, ensuring the safety of young users has become a pressing issue. Snap has not yet responded publicly to the FTC's statement, but industry experts and commentators generally agree that social media platforms must give greater weight to user protection and mental health when launching new features.