SEERIST BIG PICTURE SERIES
Regulatory & Geopolitical Risks to Heavily Influence AI Business Landscape
2024 will see calls to expand both cross-country co-operation and restrictions on artificial intelligence (AI) technologies.
As regulators globally seek to keep pace with the potentially transformative effects of AI, technological developments are likely to move faster, resulting in a fluctuating regulatory environment in which grey areas regularly appear.
Priorities, concerns and maturity of thinking differ across key jurisdictions, notably between the EU, China and the US, further complicating companies’ efforts to stay ahead of regulatory developments.
Geopolitical competition and related concerns about AI’s impact on national security will limit co-operation on global standards, likely resulting in greater localisation and the siloing of systems between key jurisdictions.
Businesses will need to watch for geopolitically driven restrictions, as well as their conflation with regulatory debates favouring particular AI business models, both of which will affect access to AI platforms.
In this note, we take a closer look at the more ambitious regulatory initiatives in China and the EU.
Conceptual co-operation
Governments around the world share many of the same concerns about the transformative nature of the innovations and applications grouped under the AI umbrella. Calls for a co-operative approach to regulating new AI-linked technologies are frequent and likely to continue. The 24th EU-China summit in Beijing in December 2023 produced commitments to co-operate on digital technologies, after EU-China meetings in September 2023 had focused on AI.
Many countries seeking to be global leaders on AI have published and advocated for global alignment and co-operation. At its third Belt and Road Forum in October 2023, China proposed a global AI governance initiative. In November 2023, the US, EU, UK, Australia and China signed the Bletchley Declaration at the UK’s AI Safety Summit, which recognises the potential dangers of AI and commits signatories to ensuring safety in its use. The joint declaration made waves in an area of intensifying competition. The reality behind these surface-level calls for co-operation is, however, more complex.
Regulatory reckoning?
To understand the extent to which AI will see more multilateral co-operation or competition, it is worth taking a closer look at the regulatory frameworks that have been proposed or passed to date. The EU and China – both with draft AI laws likely to be introduced in 2024 (though in the EU’s case not fully implemented until 2026) – have been among the most proactive jurisdictions in this area, and their approaches show some clear commonalities and differences.
Most jurisdictions with AI standards-setting ambitions have used similar catchphrases to outline their legislative objectives, including transparency, data privacy and security. Most draft and passed regulations, as well as less binding statements, call for enhanced protection of users’ data, the prevention of discrimination, and assurances that AI applications will not endanger people’s safety. There is also a shared concern about the potential misuse of AI to undermine political stability and public trust in institutions.
Looking beyond the buzzwords, there are key differences in the two jurisdictions’ emphases. Both China and the EU are looking to expand disclosures relating to AI algorithms and the training data used to build them, but China requires algorithms to be registered with the authorities and to undergo a security review before being made available to the public. The extent of disclosure requirements for companies engaged in activities deemed high-risk under the EU’s upcoming AI law is unclear, but these are likely to be less stringent. The EU’s Digital Services Act (DSA), which entered into force in 2022, already allows for the review of algorithms used by very large online platforms, defined as those with more than 45m monthly active users in the EU.
The potential for discrimination by AI models has come under media scrutiny globally, and concerns in this area have prompted numerous local-level regulations in the US restricting the use of AI and facial recognition, for instance by law enforcement. China, in contrast, has proactively integrated AI technologies such as facial recognition and online media monitoring into law enforcement. Such developments remain more controversial, and therefore more gradual, in the EU and US. Nevertheless, the latest version of the EU’s draft AI law, from December 2023, includes exemptions allowing law enforcement to use biometric identification systems in exceptional circumstances, marking a key turning point for the EU. Germany, previously a major objector, has since approved that version of the draft. In an increasingly tense global security environment, appeals to domestic security are likely to become increasingly effective.
The EU and China are both highly proactive in regulating the digital sector, and both risk hampering growth as a result. Such concerns may prompt a softening of regulatory approaches in the longer term as global competition in the sector intensifies. In the immediate term, however, despite industry concerns that the EU’s heavily regulated approach will stifle growth, steadily adjusted legislation will likely remain the bloc’s main tool for the sector. The UK, by contrast, has proposed empowering relevant regulators rather than legislating, but there have been no legislative developments so far, and the UK’s ambitions in the sector are currently likely to fall flat.
The common theme across these regulatory proposals is that they lag far behind the pace of technological development and are unlikely to catch up (the EU’s AI Act, for instance, will not enter fully into effect until 2026). Instead, regulators are likely to retain considerable ambiguity in their rules, which, particularly in China, will give the authorities wider enforcement powers where deemed necessary. Businesses, particularly in Europe and the US, have warned that longer-term regulatory uncertainty will undermine investment in the industry.
Generative geopolitics
If the impact of nascent regulation on AI development remains uncertain, the likelihood and severity of geopolitical competition over AI and related industries is less ambiguous. Many governments have raised concerns that AI will amplify national security threats and undermine social and political stability. The securitisation already affecting international trade and technology will therefore also encumber AI-related businesses. In the longer term, AI’s emerging ability to draw strategic, national security-relevant conclusions from the rapid analysis of large volumes of data may result in greater protection of the transfer and storage of even seemingly innocuous data sets.
In the near term, the key area to watch will be the development of further trade restrictions affecting AI-related companies, services and components, as well as other restrictions on market access for foreign companies. Chinese companies are currently among the largest customers of many large technology platforms offering AI-related applications, in part because of worries that new US restrictions will cut off access to the most advanced products and services later in 2024. In parallel, China is expanding localisation and self-reliance policies to boost its resilience to new restrictions. Further US export controls related to AI are likely, including denying Chinese companies access to cloud platforms offering advanced AI models, as well as to semiconductors and other hardware technologies that would help China build its own AI ecosystem.
Against this backdrop, there is significant potential for the EU and the UK to follow the US’ lead on eventual regulatory moves on AI. AI has been a major focus of US-EU Trade and Technology Council meetings and is likely to feature again in discussions in 2024. An October 2023 US executive order regulating AI contained a number of commonalities with the EU’s approach, including requirements for watermarking AI-generated content, introduced amid concerns over its potential use to manipulate public opinion, as well as requirements around safety and national security. On trade restrictions, however, the EU is likely to remain more cautious than the US, given its lower appetite for tit-for-tat retaliation with China.
The more geopolitical tensions increase, the more siloed digital ecosystems and AI development are likely to become. If former president Donald Trump (2017-21), who currently leads the race for the Republican presidential nomination, returns to office in 2025, technology localisation could intensify further. Expanding requirements for the localisation of hardware, software and, in some cases, data storage present zero-sum choices for businesses trying to comply with multiple regulatory regimes. The high costs of such compliance, and of localisation more generally, will particularly affect smaller and newer AI-related companies, undermining their ability to participate in new markets and stifling cross-border innovation. This, far more than the frequency of calls for global AI co-operation, is likely to determine the kind of impact AI development has on the world.
Sources:
“EU AI Act: first regulation on artificial intelligence”, europarl.europa.eu; “Notice of the General Office of the State Council on issuing the State Council’s 2023 legislative work plan”, gov.cn; “EU-China: Commission and China hold second High-level Digital Dialogue”, ec.europa.eu; “Trump vows to cancel Biden executive order on AI to protect free speech”, washingtonexaminer.com; Control Risks