ODI identifies risks to AI boom due to UK tech weaknesses

The Open Data Institute (ODI) has issued a stark warning about the future of AI in the UK, highlighting significant weaknesses in the country's technology infrastructure that could undermine the anticipated benefits of the AI boom.

The findings are detailed in the ODI's latest whitepaper, "Building a Better Future with Data and AI," based on research conducted in early 2024.

The report identifies several critical issues, including the shortage of high-quality data and the lack of robust governance frameworks, which pose risks to both AI adoption and deployment. These weaknesses threaten the potential economic, social, and individual gains that AI could bring.

Nigel Shadbolt, Executive Chair and Co-founder of the ODI, stressed the importance of a strong data ecosystem for AI development:

“If the UK is to benefit from the extraordinary opportunities presented by AI, the government must look beyond the hype and attend to the fundamentals of a robust data ecosystem built on sound governance and ethical foundations. We must build a trustworthy data infrastructure for AI because the feedstock of high-quality AI is high-quality data.”

The ODI's whitepaper outlines five key actions for the new government to harness AI's potential while mitigating risks:

  1. Ensure broad access to high-quality, well-governed public and private sector data to foster a diverse, competitive AI market.

  2. Enforce data protection and labour rights in the data supply chain.

  3. Empower individuals to have a say in how their data is used for AI.

  4. Update intellectual property laws to ensure AI models are trained in ways that build trust and empower stakeholders.

  5. Increase transparency around the data used to train high-risk AI models.

These recommendations aim to address the challenges and risks associated with AI, particularly those related to generative AI, which relies heavily on a limited number of machine learning datasets. The ODI's research indicates that these datasets often lack robust governance measures, leading to biases and unethical practices that could undermine trust in AI applications across critical sectors like healthcare, finance, and public services.

The ODI is also developing an "AI data transparency index" to provide clearer insights into how data transparency varies among system providers. This initiative is part of the broader effort to improve transparency and address biases in AI systems.

The report further stresses the need for safeguards to prevent the misuse of personal data in training AI models. Privacy-enhancing technologies could play a crucial role in protecting individuals' rights and privacy as AI becomes more prevalent. 

It also highlights the urgent need to update intellectual property laws to protect the UK's creative industries from unethical AI training practices, as well as the importance of legislation to safeguard labour rights in the context of AI.

The Labour Party, in its pre-election manifesto, proposed the establishment of a National Data Library to consolidate existing research programmes and enhance data-enabled public services. However, the ODI emphasises that ensuring this data is AI-ready, accessible, and trustworthy is crucial for realising these plans.

Additional insights from the ODI's research include:

  • The need for safeguards against the illegal use of personal data in AI training.

  • The importance of transparency about data sources, copyright, and inclusion of personal information.

  • The necessity of updating intellectual property laws to protect creative industries.

  • The critical role of legislation in safeguarding labour rights in the AI sector.

  • The impact of the cost of high-quality AI training data on innovation, particularly for small businesses and academic institutions.

Shadbolt concluded, “The UK has the opportunity to build better data governance systems for AI that ensure we are best placed to take advantage of technological innovations and create economic and social value whilst guarding against potential risks.”
