
Google uses AI to block 2.36 million malicious applications on Google Play

Author: LoRA  Time: 14 Feb 2025

With the number of Android users exceeding 3 billion, Google is under increasing pressure to protect user security. To counter increasingly rampant cyberattacks, Google announced that it will increase its investment in artificial intelligence (AI) to improve malware detection, strengthen privacy protections, and provide developers with more complete tools. Through these efforts, Google blocked 2.36 million policy-violating apps from being published on the Google Play Store over the past year. In addition, Google banned more than 150,000 developer accounts that attempted to publish harmful applications.


Google says its advanced AI technology plays an important role in malicious app review, with more than 92% of such reviews assisted by AI, allowing Google to act more quickly and accurately to prevent harmful applications from reaching users' phones.

Google is not alone in using artificial intelligence to fend off cyberattacks. Omer Yoachimik, senior product manager at Cloudflare, noted that artificial intelligence and machine learning help the company accurately detect and mitigate traffic anomalies, effectively defending against distributed denial-of-service (DDoS) attacks. The autonomous defense systems Cloudflare designed can train and update a million models every day to identify and resist new DDoS attacks.
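Cloudflare has not published the internals of these models, but the underlying idea of flagging traffic anomalies against a learned baseline can be sketched simply. The snippet below is a hypothetical, heavily simplified illustration (a rolling z-score over request rates), not Cloudflare's actual system; the function name, window size, and threshold are assumptions chosen for the example.

```python
from collections import deque
from statistics import mean, pstdev

def detect_anomalies(request_rates, window=60, threshold=4.0):
    """Flag time buckets whose request rate deviates sharply from the
    recent baseline. Returns the indices of suspected anomalies."""
    history = deque(maxlen=window)   # rolling baseline of "normal" traffic
    anomalies = []
    for i, rate in enumerate(request_rates):
        if len(history) >= 10:       # wait for some history before judging
            mu, sigma = mean(history), pstdev(history)
            # z-score style test: a spike far above the baseline is flagged
            if sigma > 0 and (rate - mu) / sigma > threshold:
                anomalies.append(i)
                continue             # keep the attack out of the baseline
        history.append(rate)
    return anomalies

# Example: steady traffic around 1,000 req/s followed by a sudden flood
traffic = [1000 + (i % 7) * 3 for i in range(120)] + [250_000] * 5
print(detect_anomalies(traffic))     # -> indices of the flood buckets
```

A production system would of course operate on many traffic features at once and update its models continuously, but the same principle applies: learn what normal looks like, then act on sharp deviations.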

In 2024, Cloudflare's DDoS defense system intercepted 21.3 million attacks, an increase of 50% over the previous year. Around Halloween, it also detected and blocked a massive DDoS attack peaking at 5.6 terabits per second, setting a new record with no manual intervention throughout the process.

However, as the number of cyberattacks grows, the shortage of cybersecurity talent is becoming increasingly acute. According to a survey report released by ISC2 (the International Information System Security Certification Consortium), nearly 60% of respondents said their cybersecurity teams are not large enough to deal with existing threats. While 45% of teams have begun using generative AI tools to fill the skills gap, most respondents still believe these tools need to be combined with the experience of human experts.

In the future, as the cybersecurity industry continues to develop, demand for experts in AI security will only grow. Sudhakar Singh, chief AI security officer at SAP Labs India, said cybersecurity professionals need not only to identify threats, but also to design effective protective measures and assess the risks posed by complex systems.