The UK government has emphasized that, while every country will ultimately need legislation to address the challenges posed by AI, now is not the time to introduce new laws. In a recently released AI policy paper, it argues that legislation should be considered only once understanding of AI risks has matured, which is not yet the case. This stance diverges sharply from the EU’s approach under the AI Act.
The government’s response, which follows consultation on its AI white paper published the previous year, sets out a non-statutory, context-based approach to AI regulation. In contrast to the EU’s broad risk-based strategy, the UK aims for agile regulation that can adapt to emerging issues without hindering business innovation. Citing the rapid evolution of the technology, the government has declined to enact hasty legislation or ‘quick fixes’ that could soon become obsolete.
The government acknowledges the absence of an overarching framework governing AI use in the UK, despite existing legislation and regulation covering areas such as data protection, consumer protection, product safety, equality, financial services, and medical devices. The current plan retains a sector-based regulatory approach, with regulators urged to apply five cross-sector principles: safety, transparency, fairness, accountability, and contestability.
While these principles will not be immediately placed on a statutory footing, the government plans to establish a central function to monitor AI risks across the entire UK economy. A steering committee, featuring government and regulator representatives, is also slated to be formed for knowledge exchange and coordination on AI governance.
The government will allocate £10 million in funding to assist regulators in developing capabilities and tools to respond to AI challenges. Collaboration with government departments and regulators will address potential gaps in existing regulatory powers related to AI.
Additional initiatives include the creation of a cross-economy AI risk register and a potential risk management framework, mirroring the approach taken by the National Institute of Standards and Technology (NIST) in the US. The government also commits to promoting research and innovation, enhancing public trust in AI, and issuing introductory guidance on AI assurance.
The response paper hints at the future evolution of the UK’s AI regulation, suggesting that legislative intervention may occur in response to risks from highly capable generative AI systems. The government remains committed to managing risks at the forefront of AI development through international coordination and agreements forged with leading AI developers and governments.
In addressing AI-related risks, the government is collaborating with regulatory bodies to develop solutions for bias and discrimination, considering a cybersecurity code of practice for AI, and exploring AI-related risks to information trust, including ‘deepfakes’. Future requirements for AI product and service suppliers to meet minimum standards for public contracts are also under consideration.
Technology law experts stress the challenge of striking the right balance between regulating emerging risks and avoiding a chilling effect on innovation. While the UK’s context-based approach differs from the EU’s broad risk-based strategy, legislative intervention targeting highly capable generative AI systems may yet follow in the UK.