1. Connecting the next wave of IoT Devices
2. Changing the face of the IoT ecosystem
3. Enterprises turn to IoT for sustainability and profits
4. AI at the IoT edge
5. Operationalizing AI
6. AI Vanishes to go Mainstream
7. AI Growing Up and Learning Accountability
8. Merging Tech
9. Quantum computing moving beyond the hype
Across Informa Tech's Applied Intelligence Group, analysts and editors from Omdia, AI Business, IoT World Today, and Enter Quantum have come together in this eBook to give their predictions on the tech trends shaping the digital transformation landscape in 2022 and beyond.
Beyond ChatGPT: Responsible Generative AI for Business Intelligence
Applied Intelligence Group
As we reach the halfway point of 2022, we’re living in a world where global developments mean we rely more than ever on the power of transformative technologies such as AI, IoT, and quantum to provide solutions. The adoption and continued progression of these technologies are essential across a wide range of industries and geographies that need support to overcome the challenges they face. The implementation and acceleration of transformative technology means projects can move forward at pace, and this is where we will really start to see a change in the status quo.

Individually, both AI and IoT are valuable, but it is when these technologies come together that we will continue to see the biggest impact. Their confluence will turn what were simple solutions into complex, truly impactful offerings that will alter society and businesses for years to come. It is the continued advancement of each of these technologies that will enhance the other. Omdia, our technology research brand, estimates that by 2030 there will be 75 billion IoT devices around the globe, providing an unimaginable amount of data to strengthen AI systems. At the same time, AI allows this vast amount of data to be analyzed and acted upon with unprecedented speed.

As we move forward, organizations will be looking at how they can gain competitive advantage by leveraging quantum computing to move beyond the limitations of traditional architectures, and by increasing investment in AI-driven automation.
Alongside this, they should ensure they bring together subject matter experts with data scientists and engineers throughout the process. While there is significant momentum in the space, Omdia predicts there are areas that still need to be addressed across 2022 and beyond. These include interoperability with existing IT systems; industry-wide standardization of how AI success is measured; standardized regulation and governance; and a greater drive to improve diversity within the sector.
Jenalea Howell, Vice President, Applied Intelligence Group at Informa Tech
Contents

Introduction
Current market landscape
Generative AI timeline
Generative AI risks and challenges
AI Legislation
It’s fair to say that artificial intelligence (AI) has made an impact on the public consciousness in 2023 like no technological development before it. Technologies such as the first smartphones captured people’s imagination (and wallets), but never has there been so much debate about the impact a technology will have on the way we live our lives - from the way we study to the way we work and interact, both with one another and with our devices.

Of particular interest right now is the rise of generative AI services. These are applications that create new content, such as text, images, or other media, based on a set of instructions or prompts. They work by learning the patterns and structure of existing data, then using this knowledge to generate new data with similar characteristics.

Many industry commentators expect 2023 to be a pivotal year for generative AI, with new applications of the technology being developed all the time. However, with innovation comes responsibility. Organizations looking to generative AI will be presented with great opportunities, but they will need to navigate numerous risks: reputational, financial and legal. Understanding current market drivers within the emerging landscape, potential use cases and the growing yet complex legal framework will be key for businesses that hope to take full advantage of everything generative AI has to offer over the coming years.
Introduction
One key area supporting this movement for change is increasing the presence and eminence of women in the field of AI. Organisations must first understand what can be done to close the gender gap in various areas (and why the gap exists), and then create actionable programs to accomplish this.
This report not only gives a voice to the underrepresented and unsung heroines in the field of artificial intelligence, but it also provides a framework and direction for businesses, institutions, agencies, governmental organisations and future generations of the evolving society to build a foundation of best practices when it comes to shaping the future, our future.
Responsible AI: Finding the balance between innovation and legislation
Given the enormous social, economic, and ethical implications of GAI (generative AI), it is no exaggeration to say that we could be witnessing the next transformational technology, as or more disruptive than the origin of the steam engine, car, or internet.
The Generative AI Revolution: Understanding, Innovating and Capitalizing report, Omdia.
Today the best-known examples of generative AI applications are large language models (LLMs) like OpenAI’s ChatGPT and Google’s Bard. Typically, these are trained using deep learning, a type of machine learning that uses artificial neural networks to learn from data - including books, articles, code, and other forms of text. However, generative AI isn’t restricted to text: it is now being used for image and video generation too. For example, the AI program Midjourney was recently used to create fake images of former US president Donald Trump’s arrest, as well as images of the 86-year-old Pope Francis in a puffa jacket, both of which were shared widely on social media.
Current market landscape
Omdia, a leading research group, and global media portal AI Business at Informa Tech are partnering with Women in AI, a non-profit community of 13,000 members and 200 volunteers across 150 countries, whose mission is to empower women and minorities to become AI and data experts, innovators and leaders in the advancement of ethical applications and the responsible use of AI.
Although generative AI was first seen in chatbots in the 1960s, it wasn’t until the introduction of generative adversarial networks (GANs) - a type of machine learning algorithm - in 2014 that generative AI could create convincingly authentic images, videos and audio of real people.
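Conceptually, a GAN pits two models against each other: a generator that turns random noise into candidate samples, and a discriminator that tries to tell generated samples from real ones. The toy sketch below illustrates only the adversarial objective at the heart of that idea, not a full training loop; the models, shapes and numbers are invented for illustration and are not from the report:

```python
import numpy as np

rng = np.random.default_rng(0)

# "Real" data the generator should learn to imitate: samples from N(4, 1).
def real_samples(n):
    return rng.normal(4.0, 1.0, size=(n, 1))

# Generator: maps latent noise z to candidate samples (a toy affine model).
g_w, g_b = np.array([[1.0]]), np.array([0.0])

def generate(n):
    z = rng.normal(size=(n, 1))   # latent noise
    return z @ g_w + g_b

# Discriminator: logistic regression estimating P(sample is real).
d_w, d_b = np.array([[0.5]]), np.array([0.0])

def discriminate(x):
    return 1.0 / (1.0 + np.exp(-(x @ d_w + d_b)))

# The adversarial (minimax) objective: the discriminator is trained to
# maximize this quantity, while the generator is trained to minimize it.
def gan_objective(n=256):
    d_real = discriminate(real_samples(n))   # discriminator wants ~1 here
    d_fake = discriminate(generate(n))       # discriminator wants ~0 here
    return float(np.mean(np.log(d_real + 1e-9) + np.log(1.0 - d_fake + 1e-9)))
```

In a real GAN both models are deep networks and the two objectives are optimized in alternation, which is what lets the generator gradually produce convincing images, video or audio.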
Rapid generative AI expansion
McKinsey claims that generative AI has the potential to revolutionize the entire customer operations function, improving customer experience as well as agent productivity. Increasingly, generative AI is also being deployed by sales and marketing teams to create personalized messages for customers as well as first drafts of brand advertising, social media posts and product descriptions.
DALL-E 2 is an AI image generator gaining in popularity. Created by OpenAI, it is now integrated into several platforms, including Microsoft’s AI chatbot Bing Chat, where it’s called the Bing Image Creator. Users enter text descriptions into the system and the software produces realistic, original images, drawing on the patterns and captions of images found on the internet. While the original Bing AI release was missing a few guardrails, Microsoft has now fixed this: “We have put controls in place that aim to limit the generation of harmful or unsafe images. When our system detects that a potentially harmful image could be generated by a prompt, it blocks the prompt and warns the user.” Other specialist AI image generators include Waifu Labs, which uses AI to generate custom anime portraits, and Deep Dream Generator, whose software lets users create over-processed, dreamlike images, either from scratch or from a base image they want to enhance.
Although it is inevitably the consumer applications of generative AI that have grabbed the headlines - such as its widespread use in student essays or in the creation of ‘deepfake’ images - it is undoubtedly the business world where the technology will have the greatest impact.
That the use of generative AI is growing rapidly is something of an understatement. In October 2021, when generative AI accounted for just 1% of all data produced, Gartner predicted that by 2025 generative AI would be producing 10% of all data, accounting for 20% of all test data in consumer-facing use cases. It also predicted that by 2027, 30% of manufacturers will use generative AI to enhance their product development effectiveness.
10% of data produced by generative AI by 2025
Chatbots, digital engagement tools, and interactive self-help solutions are likely to be among the first wave of use cases to make use of GAI’s (generative AI) natural language interface. GAI’s ability to build on a response in ways that mirror human conversation can potentially give automated customer service interactions more flexibility.
One company that has recently embraced the technology is cloud-based software giant Salesforce, which in March announced Einstein GPT, a generative AI for CRM (customer relationship management). The system will be able to create emails, craft customer service responses and even generate code. Salesforce will also make Einstein GPT compatible with Slack, the collaborative business tool it owns.
“B2B selling is moving from intuition-based to data-driven and from episodic sales stages to connected customer journeys powered by generative AI,” says Ketan Karkhanis, Salesforce’s executive vice president and general manager of Sales Cloud. “Whether it’s autonomous prospecting, surfacing real-time insights when you need them, or automating time-consuming tasks, AI is the new UI for every seller.”
Software engineering
Customer operations and sales/marketing are not the only areas where generative AI is expected to have a huge impact. By treating computer languages like other forms of language, you open up a range of possibilities for software engineering: generative AI can be used to generate code such as HTML, CSS and JavaScript, as well as identify and fix bugs by recognizing and analyzing the patterns associated with them. One cloud-based generative AI tool that is helping software engineers write code is GitHub Copilot. Powered by OpenAI’s Codex language model, it is trained on a massive dataset of text and code. Meanwhile, other generative AI coding tools are being released, including Microsoft IntelliCode, Alibaba Cloud Cosy, AIXcoder and TabNine. By automating tasks and improving the quality of code, it is hoped these tools will help software engineers become more productive and innovative.
“Looking ahead, GAI-powered coding could (also) radically democratize software development by offering non-technical users a path to bring new products to life,” add the authors of the Omdia generative AI report.
According to McKinsey’s analysis, the direct impact of AI on the productivity of software engineers could range between 20% and 45% of current spending on the function. This value arises from reducing the time spent on tasks such as generating initial code drafts, code correction and refactoring. Another study found that software developers using GitHub’s Copilot completed a given task 56% faster than those not using the tool.
According to McKinsey, generative AI could have an impact on most business functions; its analysis identified four areas that could account for 75% of the total annual value from generative AI use cases:
Finally, one area where generative AI could have a massive social impact is in the healthcare and pharmaceutical industries. Here are just a few areas where the technology is already proving transformational:

Drug safety: By training generative AI models on data about adverse drug reactions, the technology can help to improve the safety of drugs and healthcare treatments. For example, generative AI can be used to identify potential safety risks before a drug enters clinical trials.

Personalized medicine: Generative AI can be used to create personalized treatment plans for each patient, by analyzing a patient's genetic data, medical history and other factors to identify the treatments most likely to be effective for them.

Improved healthcare diagnostics: Generative AI can be used to develop new diagnostic tools that are both more accurate and faster than traditional methods, helping to improve the early detection and treatment of diseases. For example, Addenbrooke’s Hospital in Cambridge, England recently announced it is working with Microsoft to help doctors calculate where to direct therapeutic radiation beams to kill cancerous cells while sparing as many healthy ones as possible. The same InnerEye technology also uses generative AI to analyze medical scans to detect abnormalities, diagnose diseases and recommend treatment plans for cancer patients.

Overall, generative AI is allowing pharma companies to bring new medicines to market in much shorter timeframes, enhancing worker productivity and improving overall performance. As Omdia’s recent generative AI report concludes: “Proponents believe GAI will significantly lessen the time and costs involved in drug research, protein sequencing, and novel chemical compound discovery.”
It’s a view shared by Bloomberg Intelligence (BI), whose latest 2023 generative AI growth report states the ‘generative AI market is poised to explode’, growing to $1.3 trillion over the next 10 years from a market size of just $40 billion in 2022. Initially driven by AI programs such as ChatGPT, the market could expand at a compound annual growth rate (CAGR) of 42%, BI estimates. It claims that generative AI will expand its share of total IT hardware, software services, ad spending, and gaming market spending from less than 1% to 10% by 2032.
The largest drivers of incremental revenue will be generative AI infrastructure as a service ($247 billion by 2032), used for training LLMs, followed by digital ads driven by the technology ($192 billion) and specialized generative AI assistant software ($89 billion). Meanwhile, McKinsey’s latest research, The economic potential of generative AI: The next productivity frontier (June 2023), estimates that generative AI could add the equivalent of $2.6 trillion to $4.4 trillion across the 63 use cases it examined. By comparison, the UK’s entire GDP in 2021 was $3.1 trillion.
Customer operations
Marketing and sales
Software engineering
Research and development
Omdia Report: The Generative AI Revolution: Understanding, Innovating and Capitalizing
Healthcare
New generative AI applications
Generative AI Timeline
2014
Creation of generative adversarial networks (GANs) deep learning architecture
2016
OpenAI publishes research on generative models; DeepMind’s WaveNet uses generative audio to create human-like speech
2017
Transformer network enables advances in generative models
2019
OpenAI releases GPT-2
2021
January
OpenAI releases DALL-E, a deep learning model that creates digital images
April
The EU AI Act is first proposed
2022
July
Image generation platform Midjourney launches
November
OpenAI releases ChatGPT, built on GPT-3.5
2023
March
March 14 – OpenAI launches GPT-4
March 21 – Google launches GPT-4 competitor Bard
March 31 – Italy bans ChatGPT for collecting personal data and lacking age verification
April
German tabloid publishes a fake AI-generated interview with Michael Schumacher
May
Google announces new LLM called PaLM2 that will power Bard
June
June 14 – European Parliament adopts its negotiating position on the EU AI Act
The rapid adoption of the technology is also extremely controversial, for several reasons. These include the huge impact on the workforce and society as many jobs are automated, the potential for discrimination in how the technology is implemented, and the risk of bad actors spreading misinformation unchecked through generative AI applications. That’s not to mention the challenges around intellectual property (IP), copyright and privacy that will inevitably result from using existing data (including text, audio and video) to generate new data.

Several senior industry figures have expressed concern over AI, including generative AI, in recent months. In May, an open letter organized by the Center for AI Safety - signed by the CEO of OpenAI (the maker of ChatGPT), the head of Google’s AI lab, and the CEO of Anthropic, among others - stated: “Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war.”
Generative AI risks and challenges
Right now, what we're seeing is things like GPT-4 eclipse a person in the amount of general knowledge it has…In terms of reasoning, it's not as good, but it does already do simple reasoning. And given the rate of progress, we expect things to get better quite fast. So we need to worry about that.
Dr Geoffrey Hinton, former Google employee
Undoubtedly the biggest challenge of AI, including generative AI, is managing its impact on the workforce. AI has often been called the catalyst for the next industrial revolution (or Industry 4.0), automating many of the jobs currently carried out by people.
Up to 7%: potential increase in global GDP from generative AI
Research published by Goldman Sachs in March suggested that generative AI could increase global GDP by up to 7%, while warning that 300 million full-time jobs could be lost to automation over the coming years. And although not all automated work will result in layoffs, Omdia noted in its recent report, The Generative AI Revolution: Understanding, Innovating and Capitalizing, that generative AI’s ‘ability to help with routine task automation, content creation, and customer service assistance have … sparked fears of worker displacement and unemployment.’ The report continues: “Unlike with robotic automation, moreover, job displacement is likely to cut across both blue and white-collar work.”
Indeed, it’s a view recently echoed by the Organisation for Economic Co-operation and Development (OECD) in its 2023 Economic Outlook: Artificial Intelligence and Jobs report, which claims the occupations at highest risk from AI-driven automation are highly skilled jobs, representing around 27% of employment across 38 member countries including the UK, Japan, Germany, the US, Australia and Canada.
Impact on workforce
27%: share of employment in highly skilled jobs, which are at the highest risk from AI-driven automation
As we saw in the previous chapter, generative AI has the potential to revolutionize many industries. The technology can help organizations become much more efficient, create new business models and even help in the discovery of life-saving medicines.
The body said “occupations in finance, medicine and legal activities which often require many years of education, and whose core functions rely on accumulated experience to reach decisions, may suddenly find themselves at risk of automation from AI.” Furthermore, it continued, AI breakthroughs have already produced cases where the output from AI tools was indistinguishable from that of humans, although for the time being AI was ‘changing jobs rather than replacing them’.

Ian Hogarth, the head of the UK government’s new AI taskforce, also recently admitted ‘it was inevitable’ that more jobs would become increasingly automated, warning that the whole world will have to rethink the way people work. "There will be winners or losers on a global basis in terms of where the jobs are as a result of AI," he told the BBC.

Sounding a more positive note, however, Goldman Sachs points out that jobs ‘displaced by automation have been offset by the creation of new jobs’, with around 60% of today’s workers employed in occupations that didn’t exist in 1940. As a result, it is difficult to predict with any accuracy how many jobs will be lost to AI and how many will be gained.
Generative AI bias
Generative AI systems are inherently at risk of bias. That’s because the foundational models upon which they are built are only as good as the text, images and other datasets used to train them. Inevitably, datasets can reflect the unconscious (or sometimes conscious) prejudices of developers, many of whom are male: only 22% of professionals in the AI and data science fields are women, according to the Alan Turing Institute.

Nor is it just gender bias; often there is race bias too. For example, the landmark Gender Shades project found that datasets comprising mostly white and male faces resulted in much lower facial recognition accuracy for women, especially women of color.

The OECD has also outlined risks associated with AI’s growing influence over the workplace. These include AI tools making hiring decisions, with the risk of falling foul of biased AI-driven decisions much “greater for some socio-demographic groups who are often disadvantaged in the labour market already”.
For example, in 2018 Amazon scrapped a secret AI recruiting tool that was found to be biased against women because it relied on patterns detected in CVs from ten years earlier, when men dominated the tech industry even more than they do today. Similarly, in 2019, Apple’s credit card ran into trouble when users noticed it offered lower credit limits to women than men. Moreover, this potential to ‘unknowingly reinforce erroneous or potentially harmful data’ is ‘complicated by the lack of explainability within GAI (generative AI) systems’, claims Omdia.
It is currently impossible to trace which sources a GAI system has used to inform its answers or content, and researchers are not entirely sure even how a GAI system comes to the conclusions it returns.
UK money saving expert Martin Lewis recently said he was left ‘feeling sick’ by an online scam video featuring a realistic computer generation of him supposedly promoting an investment scheme. Lewis, who sued Facebook in 2018 over fake ads using his name, told the BBC that the ‘technology is improving at rapid speed’ and called for the government to press ahead with AI legislation. “We are scared of big tech in this country and we need to start regulating them properly,” he added.
Risk of misinformation
As generative AI technology continues to develop, it is likely that we will see even more sophisticated methods of spreading misinformation emerge. As the recent Omdia report states: “Many commentators have pointed to generative AI as a tool that could be used for fraud and abuse at scale.” Already we have noted several examples of high-profile ‘deepfake’ images generated by AI (see previous chapter). In addition, we are seeing generative AI being used to create fake social media posts which could be used to influence political opinion ahead of US and UK elections as well as deepfake videos which are difficult to distinguish from real footage.
Copyright infringement and IP challenges
Essentially, generative AI works by creating something new from something old. This means there is always the potential for a breach of copyright if the artist, writer or even coder thinks their original work was used to create the computer-generated version.
Much depends on the training data that is used to set up the generative AI system in the first place. If this contains copyrighted material, then there is a good chance the generative AI model could infringe on the original copyright. The EU AI Act (see next chapter) proposes that providers of generative AI make a summary publicly available of any copyrighted material used for training their systems, including information about the identity of the copyright owners, the nature of the copyrighted material and the extent to which it was used.
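The Act's draft text does not prescribe a format for that summary, but a machine-readable version might look something like the sketch below, which mirrors the three elements the paragraph above describes: identity of the rights holders, nature of the material, and extent of use. Every field name, model name and entry here is hypothetical:

```python
import json

# Hypothetical machine-readable summary of copyrighted training sources.
# All names and values are invented for illustration; the EU AI Act does
# not define this schema.
training_data_summary = {
    "model": "example-gai-model-v1",
    "copyrighted_sources": [
        {
            "rights_holder": "Example Author",      # identity of the copyright owner
            "work": "Example Novel (2010)",         # the work in question
            "material_type": "text",                # nature of the material
            "extent_used": "full text, tokenized",  # extent to which it was used
        }
    ],
}

summary_json = json.dumps(training_data_summary, indent=2)
print(summary_json)
```

A provider could publish such a document alongside each model release, giving rights holders a concrete artifact to inspect.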
US comedian and author Sarah Silverman, along with two other writers, recently announced she is suing ChatGPT creator OpenAI and Mark Zuckerberg’s Meta over claims that their models were trained on her work without permission. The suit against OpenAI claims that the three authors ‘did not consent to the use of their copyrighted books as training material for ChatGPT’. Meanwhile, the lawsuit against Meta claims that ‘many’ of the authors’ copyrighted books appear in the datasets that the company used to train its LLaMA AI models.
This followed the resignation of Dr Geoffrey Hinton from his role at Google earlier in the same month. Widely regarded as the ‘godfather of AI’ for his pioneering work in deep learning and neural networks, Hinton specifically cited concerns over AI chatbots which he described as ‘quite scary’. Below we look at just some of the risks and challenges of implementing generative AI solutions.
Inevitably the rapid rise of generative AI has caused widespread concerns among lawmakers keen to ensure the technology is deployed responsibly. As a result, governments across the world are now working on legislation to control how the technology can be deployed to best protect its citizens at the same time as providing genuine innovation.
AI legislation and the EU AI Act
Requirements
Transparency: Organizations must be able to understand how their generative AI system works and how it generates content. This means the developers of generative AI systems will need to provide information about the foundational models that underpin them. The legislation will require services like ChatGPT to register the sources of all the data used to “train” the machine, as well as the algorithms used to generate content. To combat the high risk of copyright infringement, the EU AI Act will also oblige developers to disclose the works of scientists, musicians, illustrators, photographers and journalists used to train their systems.
Accountability: The developers and operators of generative AI systems must be able to explain how their systems work and why they generate certain content. For example, developers will need to have a clear understanding of the biases and limitations of their systems.
Robustness: Generative AI systems must be robust and reliable and must be able to withstand attack or manipulation. This means that developers will need to take steps to ensure that their systems are protected from a cyber-attack.
Diversity: The development and use of generative AI systems must be inclusive and fair, and not discriminate against any group of people. This means that developers will need to be aware of the potential for bias in their systems and will need to take steps to mitigate these risks.
In addition to these requirements, generative AI applications may also be affected by other provisions within the EU AI Act, such as the ban on unacceptable risk AI systems. For example, a generative AI system that is used to generate deepfake images or video could under certain circumstances be considered an unacceptable risk AI system – categorised by the EU as a system that’s a threat to the ‘safety, livelihoods and rights of people’. This unacceptable category also includes biometric categorization systems using sensitive characteristics such as gender, race or ethnicity as well as the indiscriminate scraping of biometric data from social media or CCTV to create facial recognition databases.
Negative industry response
Promoting responsible AI
While it is too early to say precisely what steps businesses trading in Europe need to take in order to comply with the forthcoming EU AI Act, it is worth organizations putting systems in place ahead of time to ensure that generative AI systems are being used in an ethical and responsible way.
That’s because under the draft legislation fines for non-compliance with the prohibition of certain AI practices have been raised to EUR 40 million or 7% of the total worldwide annual turnover of the offender, whichever is higher.
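The penalty formula described above is simply the higher of two amounts: a fixed EUR 40 million floor or 7% of worldwide annual turnover. A quick sketch of the arithmetic (the turnover figures are invented for illustration):

```python
# Maximum fine under the draft legislation for prohibited AI practices:
# the higher of EUR 40 million or 7% of total worldwide annual turnover.
def max_fine_eur(worldwide_annual_turnover_eur: float) -> float:
    return max(40_000_000.0, 0.07 * worldwide_annual_turnover_eur)

# A hypothetical firm with EUR 2 billion turnover: 7% is EUR 140 million,
# which exceeds the EUR 40 million floor.
print(max_fine_eur(2_000_000_000))   # 140000000.0

# A hypothetical smaller firm with EUR 100 million turnover: 7% is only
# EUR 7 million, so the EUR 40 million floor applies instead.
print(max_fine_eur(100_000_000))     # 40000000.0
```

For large companies the percentage term dominates, which is why the provision is seen as giving the Act real teeth.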
Speaking to AI Business recently, Clare Walsh, Director of Education at the Institute of Analytics, talked about the need for organizations to act responsibly in rolling out generative AI solutions.
My advice to SMEs is to assume that the law will eventually catch up with negative practices. The Information Commissioner is very clear there will be no place to hide from reckless innovation.
Indeed, Microsoft – one of the biggest investors in OpenAI - has recently produced its own report, Governing AI: A Blueprint for the Future (May 2023) about the need for responsible AI. In the report there are several recommendations including the creation of new government-led AI frameworks and implementing “effective safety brakes” for AI systems that control critical national infrastructure. Microsoft Vice Chair and President Brad Smith also suggested the creation of a “new government agency” to develop new laws and regulations for AI foundation models. “We need to think early on and in a clear-eyed way about the problems that could lie ahead,” Smith said. “As technology moves forward, it’s just as important to ensure proper control over AI as it is to pursue its benefits.”
He added:
We are committed and determined as a company to develop and deploy AI in a safe and responsible way. We also recognise, however, that the guardrails needed for AI require a broadly shared sense of responsibility and should not be left to technology companies alone.
However, while organizations have generally recognized the need for greater controls over AI applications, the industry response to the tight restrictions in the EU AI Act has been far from positive. Published in July 2021, a few months after the EU first proposed its AI Act, a report by the Center for Data Innovation warned that the Artificial Intelligence Act would cost the European economy €31 billion over the next five years and reduce AI investments by almost 20 percent.
A European SME that deploys a high-risk AI system will incur compliance costs of up to €400,000 which would cause profits to decline by 40 percent.
Then at the end of June 2023, a few weeks after the first draft of the EU AI Act was agreed, some of the biggest companies in Europe took collective action to criticize the act, claiming that it was ineffective and could negatively impact competition. In an open letter sent to the European Parliament, over 150 executives from companies including Renault, Heineken, Airbus, and Siemens stated the following:
As engaged stakeholders of the European economic sector, we would like to express our serious concerns about the proposed EU Artificial Intelligence (AI) Act. In our assessment, the draft legislation would jeopardise Europe's competitiveness and technological sovereignty without effectively tackling the challenges we are and will be facing. This is especially true regarding generative AI. Under the version recently adopted by the European Parliament, foundation models, regardless of their use cases, would be heavily regulated, and companies developing and implementing such systems would face disproportionate compliance costs and disproportionate liability risks.
How should businesses prepare for the EU AI Act?
Nevertheless, despite reservations from some commentators, growing regulation is coming to the generative AI industry over the next few years. So what steps can organizations take to prepare? According to London-based law firm Mayer Brown, practical steps that organizations might take to prepare for AI legislation both in Europe and the rest of the world include the following:
Critically assess data governance practices for training of AI models in view of the potential requirements of the draft EU AI Act (as well as other global frameworks in the making).
Prepare the support documentation of the relevant AI system in line with potential conformity, documentation, data governance and design obligations.
Assess processes in place for reporting incidents related to AI systems.
Consider existing legislation. There are already laws in place that govern some aspects of AI. These include human rights laws dealing with discrimination, as well as Article 22 of the GDPR, which limits the circumstances in which AI can make automated decisions about an individual.
According to current GDPR legislation, “the data subject shall have the right not to be subject to a decision based solely on automated processing, including profiling, which produces legal effects concerning him or her or similarly significantly affects him or her.”
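In engineering terms, Article 22 is often handled as a human-in-the-loop gate: a decision that is produced solely by automated processing and has a legal or similarly significant effect must not be released without human involvement. A minimal illustrative sketch of that gate (all names and the routing logic are our own assumptions, not anything prescribed by the regulation):

```python
from dataclasses import dataclass

@dataclass
class Decision:
    outcome: str
    automated: bool           # produced solely by automated processing
    significant_effect: bool  # legal or similarly significant effect

def route_decision(d: Decision) -> str:
    """Route decisions that would fall under GDPR Article 22
    to human review instead of releasing them automatically."""
    if d.automated and d.significant_effect:
        return "queue_for_human_review"
    return "release"

# A fully automated loan rejection has legal effect, so it is gated;
# an automated content recommendation is not.
print(route_decision(Decision("reject_loan", True, True)))
print(route_decision(Decision("recommend_article", True, False)))
```

The design choice here is deliberate: rather than trying to detect "significant effect" automatically, the flag is set explicitly by the system designer, which is where the legal judgment actually lives.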
First proposed by the European Commission on 21 April 2021, the EU AI Act is the world’s first piece of dedicated AI legislation, although others are set to follow over the coming years (see Global AI legislation). Formally approved by the EU on June 14th, 2023, the act is expected to pass a final draft by the end of 2023, though it most likely won’t come into force until 2025. The act divides AI into four categories according to risk: minimal risk, limited risk, high risk and unacceptable risk. As part of the recently updated draft legislation, generative AI tools like OpenAI’s ChatGPT and Google’s Bard will now come under particular scrutiny. Regarded as a type of high-risk AI system (although not formally categorized in one of the tiers under the new Article 28B), generative AI systems will need to be submitted for review before their commercial release. They will also be subject to several requirements outlined below.
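The four-tier structure lends itself to a simple enumeration. A hypothetical sketch of how a compliance team might encode the tiers and the pre-release review requirement described above (the mapping is our reading of the draft, not an official classification scheme):

```python
from enum import Enum

class AIRiskTier(Enum):
    MINIMAL = "minimal risk"
    LIMITED = "limited risk"
    HIGH = "high risk"
    UNACCEPTABLE = "unacceptable risk"

def requires_pre_release_review(is_generative: bool, tier: AIRiskTier) -> bool:
    """Under the updated draft, generative AI systems must be submitted
    for review before commercial release regardless of tier; high-risk
    systems are reviewed in any case."""
    return is_generative or tier == AIRiskTier.HIGH

# A generative chatbot is reviewed even if it sits outside the high-risk tier.
print(requires_pre_release_review(True, AIRiskTier.LIMITED))
```

Unacceptable-risk systems are omitted from the review logic because, under the act, they are prohibited outright rather than reviewed.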
Omdia Report: The Generative AI Revolution: Understanding, Innovating and Capitalizing
Global AI legislation
While undoubtedly the EU is leading the way when it comes to widespread AI legislation, other regions are following suit, generally with more targeted interventions either to promote the use of AI or to control its use.
For example, with the emergence of generative AI services in the People’s Republic of China (PRC) that are similar to ChatGPT, the Cyberspace Administration of China (CAC) issued draft Measures for the Management of Generative Artificial Intelligence Services in April 2023. These Draft Measures set out proposed rules for the regulation of use of Generative AI in the PRC, with the consultation period having ended on 10 May 2023.
These Draft Measures are issued in accordance with PRC Cybersecurity Law, the Personal Information Protection Law (PIPL) and the Data Security Law, and follow other pieces of legislation that aim to regulate the use of AI in China – namely the “Internet Information Service Algorithmic Recommendation Management Provisions” (Algorithmic Recommendation Provisions) and the “Provisions on the Administration of Deep Synthesis Internet Information Services”. These were enacted in 2022 and 2023 respectively.
Unlike Europe, and to some extent China, the US does not yet have any comprehensive AI legislation. However, there are several pieces of legislation related to AI. These include the National Artificial Intelligence Initiative (NAII), a government-wide effort to promote the development and deployment of AI, and the Algorithmic Accountability Act of 2022, which requires companies to assess the impacts of the automated systems they use and sell. This Act aims to create greater transparency about when and how automated systems are used and to empower consumers to make informed choices about the automation of critical decisions.
Other examples of AI legislation in the US include the AI in Government Act of 2020 which established the AI Center of Excellence in the General Services Administration (GSA) and A Local Law to Protect Against Bias in Automated Decision Systems. Passed in New York City in 2021, this local law requires employers to conduct bias audits of AI-enabled tools used for employment decisions.
Finally, in Britain, the AI (Data Protection, Transparency and Oversight) Act amended the General Data Protection Regulation (GDPR) in 2022 to include specific provisions for AI, while the Online Safety Bill currently under consideration by Parliament will require social media platforms and other online platforms to take steps to mitigate the risks of harm caused by AI-powered content.
“I want to make the UK not just the intellectual home but the geographical home of global AI safety regulation,” UK Prime Minister Rishi Sunak told the media during an event at the recent London Tech Week (June 2023), before announcing that the first global AI safety summit would take place in the UK later this year.
Who are we?
AI Business provides news and analytical insights you can trust. We’re dedicated to offering an objective view of the AI space to enable better strategic decisions. Our editorial content focuses on the practical applications of AI technologies rather than hype, buzzwords and high-level technology updates.
55k+
Monthly Unique Users
100k+
Monthly Page Views
15k+
Newsletter Subscribers
60k+
Followers on Social Channels
You can stay up to date with all that’s happening in the fast-paced world of AI through our media site, newsletter, podcast, live events and more. Want to Partner With Us? If you’re interested in associating your company with AI or getting your brand in front of enterprise decision makers and practitioners, we offer a range of partnership opportunities. Showcase your knowledge and expertise through our suite of digital marketing solutions, increase brand awareness with onsite advertising, or work with us to build a multi-faceted campaign that supports your marketing objectives.
Discover the opportunities
Discover Technology that connects the dots
Assess the marketplace, advance your business and amplify your presence in the market using our deep knowledge of tech markets and our actionable insights. The experts at Omdia bring unparalleled, world-class research and consultancy to navigate the now and create the future.
AI Business
Omdia