Does President Trump’s revocation of the 2023 executive order mandating safety disclosures from AI companies to the federal government drive a stake into the heart of the UK’s recently announced AI Action Plan, or embolden it?
Promodo
January 30, 2025
The recent decision by President Trump to revoke the 2023 executive order mandating safety disclosures from AI companies to the federal government has sparked intense debate. Does this move undermine the UK’s ambitious AI Action Plan, or could it inadvertently bolster the UK’s position as a global leader in AI governance?
Having worked with some of the world’s leading tech companies—many of them US-based—I have seen firsthand the tension between technical and business priorities. Tech teams often aim for near-perfection, testing solutions until they are 99% confident in their reliability. Business leaders, by contrast, often prioritise “time to value”—getting products to market as quickly as possible, even if they are just “good enough.” For the latter, customers effectively become unwitting participants in final testing, fault-finding, and feature enhancement.
Trump’s revocation of the executive order aligns squarely with this “good enough” mindset, removing requirements for safety and transparency to accelerate AI innovation. This decision stands in stark contrast to the UK’s newly announced AI Action Plan, which emphasises “enabling safe and trusted AI development through regulation, safety, and assurance.” It also diverges sharply from the EU’s 2024 AI Act, widely regarded as the most comprehensive framework for AI governance to date.
Divergent Reactions and Implications for Business
Unsurprisingly, reactions to these changes have been mixed. In the US, some companies welcome the reduced regulatory oversight as an opportunity to fast-track their AI products to market. Others worry it could weaken the global competitiveness of US-made AI solutions, particularly in more regulated markets like the UK and the EU.
The UK government’s commitment to AI governance comes at a pivotal moment. To succeed, however, it must move beyond fine words and grand ambitions. Delivery at pace and scale is the only true measure of success. Past government initiatives—promising world-class infrastructure and solutions—have often fallen short in execution. This time, there is no room for delays or political distractions, especially for public-sector organisations that are being actively encouraged to incorporate AI adoption objectives into Local Growth Plans within the next 12 months.
The stakes are high. Countries like the US and China are moving rapidly to capitalise on AI advancements. The UK must position itself as a leader in safe and ethical AI adoption while fostering innovation. The government’s claim that AI will deliver the “precious gift of time” to frontline workers must be backed by tangible results. The focus must be on enabling both public and private sector organisations to adopt AI responsibly and effectively.
Preparing UK Organisations for the AI Revolution
For UK businesses, the priority is clear: readiness. Organisations must take proactive steps to prepare for the wave of AI-driven initiatives and solutions that will inevitably shape the market. Here are some key actions to consider:
- Educate Leadership: Ensure key decision-makers have a foundational understanding of AI. This knowledge is critical for making informed decisions about AI investments and strategies.
- Define AI Ambitions: Develop a clear AI strategy that identifies areas where AI can add value and areas where its adoption should be more cautious.
- Establish Governance: Create robust governance structures to oversee AI initiatives. This includes ethical guidelines, risk assessments, and compliance protocols.
- Choose the Right Partners: Collaborate with technology providers who prioritise safety, ethics, and transparency in their AI solutions.
- Develop a Long-Term Plan: AI is not a one-time investment. Organisations need a roadmap for integrating AI into their operations over the coming years.
These are addressed more fully in PromodoAI’s ten guiding principles for successful, safe, and ethical AI development and implementation.
The Procurement Challenge
A critical challenge for UK organisations will be procuring AI solutions that are ethical, safe, and secure. While many overseas companies supply AI-enhanced products to the UK, not all may adhere to the same safety and ethical standards. Organisations must carefully specify their requirements and ensure they have the expertise to evaluate potential solutions effectively.
It is worth noting that the “innovate-first, safety-later” approach will not be exclusive to the US; other countries may adopt, or are perhaps already adopting, similar strategies. UK organisations must decide whether they are willing to act as test beds for these solutions or whether they require stricter assurances before deployment. The UK government’s focus on AI governance may help address these concerns, but there is no guarantee that it will. Businesses, and public-sector organisations in particular, must take responsibility for their own readiness.
Conclusion
Trump’s revocation of AI safety disclosures highlights a fundamental difference in approach between the US and the UK. While the US seeks to accelerate innovation at all costs, the UK’s emphasis on safe and trusted AI development offers a more measured and sustainable path forward. However, the success of the UK’s AI Action Plan hinges on swift and effective implementation.
For businesses, the message is clear: the AI revolution is here, and readiness is key. By educating leaders, defining clear ambitions, establishing governance, and partnering wisely, UK organisations can position themselves to thrive in this transformative era. The question is not just whether the UK can lead in AI governance but whether its businesses are prepared to harness AI’s potential responsibly and effectively. Are you ready?
Steve Peel