As artificial intelligence becomes more embedded across sectors, governments are responding with new regulations intended to manage risks while enabling innovation. However, complex and fragmented regulatory approaches could undermine trust in AI among key decision-makers. This analysis compares emerging AI governance laws and regulations in the EU, US, and UK, examining their potential impact on trust among the executives, managers, and workers adopting AI systems.
The EU’s AI Act categorises AI systems by risk and sets rules to protect fundamental rights while enabling innovation. The US has a Blueprint for an AI Bill of Rights and an executive order on safe, secure, and trustworthy AI, but no comprehensive federal legislation yet. The UK takes a pro-innovation approach, issuing guidelines for responsible AI use overseen by existing regulators.
BUILDING TRUST IN AI
EU: The EU AI Act promotes accountability and transparency in AI. Audits that check development and deployment processes can help executives place greater trust in AI. Managers have duties to monitor systems for performance and compliance. Restrictions on problematic AI uses protect workers while allowing innovation, although some permitted uses could still undermine rights.
US: Over 90 per cent of AI executives say that AI improves their confidence in decision-making, but adoption elsewhere lags. Academic research shows that ethical practices shape trust in AI. Companies would use AI more widely with guidelines for fairness, explainability, and privacy; however, common values across industries do not yet exist.
UK: The UK’s pro-innovation framework aims to give companies confidence in adopting AI. Rather than imposing new statutory duties, it relies on existing regulators to apply cross-cutting principles such as safety, transparency, fairness, and accountability.