
Is the U.S. Falling Behind in AI Regulation? A Global Perspective
Governments worldwide are struggling to keep pace with the regulation of artificial intelligence as the technology continues to advance rapidly. In the United States, the regulatory landscape is a patchwork of executive orders, voluntary industry commitments, and state initiatives. This contrasts with the more unified approaches taken in jurisdictions such as the European Union (EU).
This article examines the current state of AI regulation in the US compared with other countries. It explores concerns about the US’s potential shortcomings while also addressing the contrasting view that the US may not be falling behind at all.
The Current State of AI Regulation in the US
Federal initiatives
Currently, the US has no federal legislation that specifically addresses AI. Instead, it relies on existing laws, executive orders and voluntary industry agreements. For example, President Biden issued Executive Order 14110 in October 2023 to promote safe and trustworthy AI development. The order required federal agencies to establish “chief AI officer” positions and set standards for AI use in critical infrastructure and cybersecurity.
However, in January 2025, President Trump rescinded this order and issued Executive Order 14179: “Removing Barriers to American Leadership in Artificial Intelligence.” This directive aims to encourage AI development free from perceived ideological bias. It also calls for an action plan to sustain US AI leadership, emphasising the country’s economic competitiveness and national security.
Voluntary industry agreements
Many leading AI companies, including Amazon, Google, IBM and Meta, have committed to promoting safe and secure AI development. The agreed measures include conducting thorough security testing before releasing systems, implementing safeguards and collaborating to flag and manage risks.
For example, Google acknowledges its responsibility for “implementing appropriate human oversight, due diligence, and feedback mechanisms to align with user goals, social responsibility, and widely accepted principles of international law and human rights.”
State-level regulations
The lack of federal legislation means states have stepped in with their own AI laws. At least 12 states, including California, Colorado, and Illinois, have introduced or proposed such laws. Some have taken a bold approach with broad legislation, while others have made narrower interventions, regulating more discrete areas affected by AI.
For instance, California enacted a law in January 2024 requiring AI developers to publish information about how they train their models. Colorado followed in May 2024 with an AI consumer protection law requiring developers and deployers of high-risk AI systems to minimise the risks of algorithmic discrimination.
International Comparisons: The EU and Beyond
The EU’s AI Act
The EU’s approach to AI regulation is strikingly different from the US’s. The Artificial Intelligence Act, which came into force in August 2024, takes a far more comprehensive stance, classifying AI systems according to the level of risk they pose. The Act imposes strict requirements on high-risk applications and mandates transparency, data governance and human oversight where appropriate.
Organisations have two years to implement most of the Act’s provisions. However, some took effect in February 2025, including bans on AI systems for certain uses and a requirement that companies ensure their employees have sufficient AI knowledge. The European Commission expects this unified framework to streamline regulatory oversight and enforcement while promoting a single market for AI within the EU.
While the Act’s implementation is still in its early stages, companies in the EU or serving EU customers must proactively introduce measures to ensure compliance before most of its obligations apply in August 2026.
China’s AI Measures
China introduced the Interim Measures for the Management of Generative AI Services (“the AI Measures”) in August 2023. The measures aim to balance AI development and innovation with security by requiring enhanced supervision and data labelling. In addition, China has other regulations that bear on AI governance, including the “Recommendation Algorithms Provisions” and the “Deep Synthesis Provisions”. In September 2025, China will also introduce new labelling rules requiring AI-generated content to be explicitly or implicitly labelled as such.
The UK’s Online Safety Act
While the United Kingdom doesn’t have specific AI legislation, the Online Safety Act 2023 addresses some associated risks. For example, the Act requires service providers to promptly remove illegal or harmful content from their platforms, including AI-generated or AI-driven bot content. The UK appears to be taking an innovation-forward approach to AI development, but there remains ambiguity regarding how the country will regulate this fast-evolving tech.
Concerns About the US Approach
A fragmented regulatory landscape
Critics argue that the US’s piecemeal approach to legislation will lead to inconsistencies and gaps in AI governance. The reliance on executive orders makes policies susceptible to changes with each administration, potentially leading to regulatory instability.
The absence of comprehensive federal laws could create ambiguity around AI development and deployment standards. Unclear laws and guidance are likely to result in more breaches and enforcement action, placing a heavier burden on regulators and enforcement agencies.
While state-level regulations help address specific concerns, the lack of unification could create a complicated legal situation for companies operating across multiple jurisdictions. This patchwork approach might hinder innovation and complicate compliance efforts.
Although voluntary industry standards can be powerful, they may suffer from inconsistency and the absence of enforcement. The standards’ scope may also be limited, failing to address societal factors like bias and discrimination. There’s a risk companies will claim they’re committed to responsible AI deployment for optics instead of actually implementing safe practices.
Public distrust
US public sentiment reflects these concerns, as highlighted in an April 2025 Pew Research Center survey. The research revealed that a large proportion of Americans are wary of AI’s impact on society, with many expressing a desire for more robust regulatory frameworks. The survey, which involved over 5,000 members of the public, shows widespread scepticism toward AI technology and its regulators.
Interestingly, the survey also included over 1,000 AI experts, who are far more optimistic: around 75% of experts believe AI will benefit them personally, compared with roughly 25% of non-experts asked the same question.
However, one thing is certain: majorities in both groups believe individuals have too little control over how AI affects their lives and want greater say over its use. Both groups also have little confidence in the US government or the private sector to manage AI responsibly.
Is the US Really Falling Behind?
Flexibility and innovation
Some believe the US is purposefully avoiding rigid frameworks to allow for rapid tech growth. Paul Scharre and Vivek Chilukuri, writing for Time, argue that “a uniquely American approach” is needed to promote innovation while managing risks and protecting users. They state that regimented standards could quickly become outdated in a dynamic field like AI.
Avoiding overregulation
Experts warn that overly restrictive federal regulations could result in strategic disadvantages, leading to other countries surpassing the US in tech innovation. Many advocate for balanced governance that encourages tech growth and evolution, which could align with the country’s current patchwork approach.
Supporting startups
The Chief Technical Officer of Qodek, Skander Nably, asserts that overregulation could limit startup participation in the tech industry. Strict federal laws could impose complex and costly compliance burdens on new companies and stunt innovation. Instead, there’s an argument for “smart, harmonized, and flexible rules” that promote growth while ensuring user safety.
Toward targeted legislation?
Institutions such as the Center for Strategic and International Studies (CSIS) suggest that, although the US model may take longer to consolidate, it could produce a more adaptable framework, one able to adjust to ever-changing technological developments. However, they also note that such an approach could undermine the US’s global competitiveness in the AI industry.
Conclusion
The question of whether the US is falling behind in AI regulation is far from settled. On the one hand, its fragmented regulations – driven by executive orders, voluntary agreements, and state-led initiatives – raise concerns about legal instability, inadequate safeguards, and public mistrust. Compared with the more cohesive regulatory frameworks in the EU and China, the US lags behind in implementing a comprehensive national standard for AI deployment.
On the other hand, several experts argue that the US model is a deliberate strategy to encourage innovation. Flexible and less prescriptive policies may help avoid overregulation and keep pace with rapid technological developments. Proponents highlight that a lighter regulatory touch could boost startup participation in the market, support competitiveness, and result in more adaptive governance over time.
Ultimately, the US may not be falling behind so much as taking a different path, one that prioritises innovation and decentralisation. However, this route risks short-term fragmentation that could have lasting consequences. Whether the US’s approach proves effective will depend on how well it balances AI growth with trust, safety, and accountability.
Contact TechLaB
Find out how TechLaB can help you reach your goals with our business-oriented, fast, innovative, multilingual yet detail-oriented legal advice.