When it comes to artificial intelligence (AI) regulation, the British government has signalled that it intends to chart its own course, diverging from major Western counterparts such as the European Union and the United States. Feryal Clark, the UK Minister for AI and Digital Government, emphasized in an interview with CNBC that the UK must "do its own thing" to ensure that the necessary oversight of AI model safety is carried out early on.
Clark mentioned that the British government has established good relationships with some AI companies (such as OpenAI and Google DeepMind), which voluntarily open their models to the government for safety testing. "Safety needs to be built into the model very early on in its development, so we will work with industry to develop relevant safety measures," she said.
This view was echoed by British Prime Minister Keir Starmer, who pointed out that after Brexit the UK has greater regulatory freedom and can choose the regulatory model that suits it best. Starmer noted that while different regulatory models exist around the world, including those of the EU and the United States, the UK can choose the approach that best serves its own interests.
So far, the UK has not introduced dedicated AI legislation, instead relying on its existing regulators to oversee the technology under current rules. This stands in stark contrast to the European Union, which has passed its comprehensive AI Act aimed at harmonizing rules for the technology. The United States, meanwhile, has no AI regulation at the federal level and instead has a fragmented patchwork of rules at the state and local levels.
Although the British government pledged in 2022 to regulate "frontier" AI models, it has yet to announce the details of any safety legislation, saying it will propose formal rules after consulting with industry. Chris Mooney, a partner at the law firm Marriott Harrison, argues that the UK's "wait-and-see" stance leaves AI regulation unclear, leaving companies dissatisfied and uneasy.
On copyright, the British government has also begun reviewing the existing framework to assess whether exceptions are needed when AI developers use the works of artists and media publishers for model training. Sachin Dev Duggal, CEO of AI startup Builder.ai, voiced concern about the government's action plan, calling it "borderline reckless" to push ahead without clear rules in place.
Still, some industry insiders believe the UK could adopt a more flexible approach to regulation. Russ Shaw, founder of Tech London Advocates, said the UK is working to find a "third way" on AI safety and regulation, namely developing sector-specific rules for industries such as finance and healthcare.