AI regulation aims to identify and mitigate the ethical challenges and security risks posed by AI. Regulation can also accelerate AI adoption by providing legal certainty and enhancing public trust. GlobalData, a data and analytics company, has identified three regulatory approaches to AI. The EU’s approach aims to protect the consumer. The US approach prioritizes protecting businesses, particularly tech companies, while in countries such as China, AI regulation aims to protect the government.
GlobalData’s latest Strategic Intelligence report, “AI Global Regulatory Landscape,” analyzes the different approaches to AI regulation worldwide. The framework highlights two critical dimensions shaping these approaches. The first is the strength of AI regulation: the extent to which a country has introduced statutory regulation on AI (strong regulation) or has instead emphasized self-regulation, the idea that organizations can safeguard AI without government intervention (weak regulation).
The second dimension is the focus of AI regulation. It has three categories based on whether the regulation aims to protect consumers, promote business innovation, or support the government’s agenda.
Laura Petrone, Principal Analyst, Strategic Intelligence team at GlobalData, comments, “The common challenge across the different approaches is ensuring regulations remain relevant to a technology that is evolving quickly while not hampering innovation. However, there is no evidence that a higher level of regulation is detrimental to innovation, as both the EU and China are key players in AI while shaping the AI regulatory agenda. What is true is that legal certainty is paramount for companies that need to make investment decisions on AI, and a lack of regulation could discourage investments in the long run.”
The next few years will be decisive for the EU’s enforcement of its AI Act, which could demonstrate to the world that a risk-based approach targeting general-purpose AI models can work. GlobalData’s analysis shows that this approach is inspiring AI legislation in Australia, Brazil, and Canada, where stronger, consumer-oriented regulation is being discussed or proposed.
Petrone concludes, “There will be increased divergence between the US and the EU on AI regulation, with the EU’s vision on this and other tech regulations perceived by the Trump administration as an attack on US Big Tech companies. In contrast to the dynamic trajectories of most of the countries analyzed, China, the UAE, and Saudi Arabia appear static in their future direction of travel. These countries’ visions of AI regulation are shaped and led by their governments to protect state interests and ensure the continuity of their regimes.”

