Transformative Horizons: A deep dive into the Generative AI revolution
Applied Intelligence Group
in partnership with:
Introduction
It’s fair to say that Artificial Intelligence (AI) is making a significant impact on the public consciousness. Not since the first smartphones captured people’s imaginations and wallets has there been as much debate about the impact a technology will have on our lives - from the way we study to the way we work and interact with one another. Of particular interest right now is generative AI. A technology that enables the creation of novel outputs (including text, images, video, speech, 3D objects and code) based on a set of instructions or prompts, generative AI works by learning the patterns and structure of existing data. It then uses this knowledge to generate new data with similar characteristics. Undoubtedly, generative AI has the potential to revolutionise industry, creating new business models, reducing operational costs and driving productivity.
However, as with any new technology, the rollout of generative AI applications, and of AI more broadly, brings challenges - not least ensuring that it is done responsibly and safely. November 2023 saw the UK host the world’s first AI Safety Summit at Bletchley Park, home of the World War Two code breakers. And in December 2023 the EU finally passed its landmark Artificial Intelligence Act, which is due to come into force in 2025. Looking to the future, organisations will be able to use the latest generative AI applications to exploit new business opportunities, if they are not doing so already. However, doing so will inevitably present numerous challenges along the way. In this report we look at the current market drivers within the emerging landscape, potential use cases, and the growing (and potentially confusing) legal framework that businesses will have to navigate if they are to take advantage of everything generative AI has to offer over the coming years.
“Given the enormous social, economic, and ethical implications of Gen AI (generative AI), it is no exaggeration to say that we could be witnessing the next transformational technology, as or more disruptive than the origin of the steam engine, car, or internet.”
The Generative AI Revolution: Understanding, Innovating and Capitalizing report, Omdia.
CURRENT MARKET LANDSCAPE
One of the best-known examples today is OpenAI's ChatGPT, a generative AI application based on the GPT-3.5 and now GPT-4 large language models (LLMs). Other examples include PaLM and Gemini (formerly Bard), both from Google. Typically, these models are trained using a technique known as deep learning, a type of machine learning that uses artificial neural networks to learn from data. The training data can include books, articles, code, and other forms of text.
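The principle of learning the patterns of existing data and then generating new data with similar characteristics can be illustrated with a toy bigram model in Python. This is a deliberately crude sketch - real LLMs use deep neural networks with billions of parameters - but the learn-then-sample loop is the same basic idea, and the tiny corpus below is purely illustrative:

```python
import random
from collections import defaultdict

def train_bigram_model(text):
    """Learn which word tends to follow which: a crude stand-in for
    the pattern-learning step in generative models."""
    model = defaultdict(list)
    words = text.split()
    for current_word, next_word in zip(words, words[1:]):
        model[current_word].append(next_word)
    return model

def generate(model, start, length=8, seed=0):
    """Generate new text that mimics the statistical patterns
    of the training data."""
    rng = random.Random(seed)
    word = start
    output = [word]
    for _ in range(length - 1):
        followers = model.get(word)
        if not followers:
            break
        word = rng.choice(followers)
        output.append(word)
    return " ".join(output)

corpus = ("generative ai learns patterns in data and "
          "generative ai generates new data from patterns")
model = train_bigram_model(corpus)
print(generate(model, "generative"))
```

Every word pair the toy model emits was seen somewhere in its training text, which is why larger versions of this idea can produce output that feels authentically human.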
Omdia, a leading research group, and global media portal AI Business at Informa Tech are partnering with Women in AI, a non-profit community of 13,000 members and 200 volunteers across 150 countries, whose mission is to empower women and minorities to become AI and data experts, innovators and leaders in the advancement of ethical applications and the responsible use of AI.
Although generative AI was first introduced in chatbots in the 1960s, it wasn’t until 2014, with the arrival of generative adversarial networks (GANs), a type of deep learning model architecture, that generative AI could create convincingly authentic images, videos and audio of real people.
Generative AI: Market Opportunity
For example, McKinsey claims that GAI has the potential to revolutionise the entire customer operations function, improving customer experience as well as agent productivity. Increasingly, GAI is being deployed by sales and marketing teams to create personalised messages for customers as well as first drafts of brand advertising, social media posts and product descriptions.
However, Gen AI applications aren’t just restricted to text, with the technology now being used for image and video generation too. OpenAI’s DALL-E 3 is a hugely popular AI image generator in this space, now integrated into several platforms including Microsoft’s AI chatbot Copilot (formerly Bing Chat). Users enter text descriptions into the system and the software produces realistic, original images based on the patterns and captions of images found on the internet. Also recently unveiled by OpenAI is Sora (the Japanese word for sky), an AI tool that claims to be able to create cinematic-quality videos of up to one minute long using only text prompts. In February 2024, Sam Altman, OpenAI’s CEO, invited users on X to submit prompts for videos that Sora then created.
75%
While it is Gen AI’s consumer applications that have understandably grabbed the headlines, the technology is already having a huge impact on the business world in terms of growth and profitability. Gen AI could completely transform industry, raising global GDP by 7%, or almost $7 trillion, according to Goldman Sachs.
“Transforming Business Applications”
of the total annual value from generative AI use cases
That the use of generative AI is growing rapidly is something of an understatement.
The market for GAI applications erupted in 2023, as startups and hyperscalers released a wave of large language and diffusion models, the basic building blocks for GAI. Omdia’s recent Generative AI Market Forecast (September 2023) reveals that the market for GAI applications will grow from $6.2 billion in 2023 to $58.5 billion in 2028, a CAGR of 56%. Looking more broadly at the AI software market, Omdia estimates GAI will add $3 billion to the AI market estimate in 2023 and $13 billion in 2027, leading to a total AI software market of $160 billion in 2027.
Current Market Landscape
10%
of data will be produced by generative AI by 2025
Chatbots, digital engagement tools, and interactive self-help solutions are likely to be among the first wave of use cases to make use of Gen AI’s natural language interface. Gen AI’s ability to build on a response in ways that mirror human conversation can potentially give automated customer service interactions more flexibility.
One company that has recently embraced the technology is cloud-based software organisation Salesforce, which announced a Gen AI tool for CRM (Customer Relationship Management) last year. Called Einstein GPT, the system will be able to create emails, craft customer service responses and even generate code. Salesforce will also make Einstein GPT compatible with Slack, the collaborative business tool it owns. Research from the National Bureau of Economic Research found that one company that introduced a Gen AI-based conversational assistant for its 5,000 customer service agents increased the number of issues resolved per hour by 14% and reduced the time spent handling an issue by 9%. It also reduced agent attrition and requests to speak to a manager by 25%.
Software engineering
It’s not just customer operations and sales/marketing where Gen AI is expected to have a huge impact. With the use of natural language in Gen AI to generate code, a range of possibilities opens up for software engineering. For example, Gen AI can be used to generate code such as HTML, CSS and JavaScript, as well as identify and fix code by recognising and analysing patterns that are associated with bugs. Speaking at the recent World Government Summit in Dubai, Nvidia CEO Jensen Huang argued that because of the rapid advancements made by AI, “learning to code should no longer be a priority for those looking to enter the tech sector.” He added: “It is our job to create computing technology such that nobody has to program. …This is the miracle of artificial intelligence.” However, for those already in the industry, cloud-based Gen AI tools such as GitHub Copilot can help write code. Powered by OpenAI’s Codex language model, it is trained on a massive dataset of text and code. Meanwhile, other Gen AI tools are being released for coding, including Microsoft IntelliCode, Alibaba Cloud Cosy, AIXcoder and TabNine.
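To illustrate the idea of recognising patterns associated with bugs at toy scale, the sketch below flags a few well-known JavaScript pitfalls using hand-written rules. Real Gen AI coding tools learn such patterns statistically from huge code corpora rather than from a rule list; the rules, warnings and sample snippet here are purely illustrative assumptions:

```python
import re

# Toy "pattern recognition" linter: each rule pairs a regex for a
# common JavaScript bug pattern with a suggested fix. Gen AI tools
# learn such patterns from data rather than a hand-written list.
BUG_PATTERNS = [
    (re.compile(r"(?<![=!<>])==(?!=)"),
     "loose equality '==': consider strict '==='"),
    (re.compile(r"\bvar\b"),
     "'var' is function-scoped: consider 'let' or 'const'"),
    (re.compile(r"if\s*\([^)]*(?<![=!<>])=(?!=)[^)]*\)"),
     "assignment inside 'if' condition: did you mean '=='?"),
]

def flag_bugs(js_source):
    """Return (line_number, warning) pairs for suspicious patterns."""
    findings = []
    for lineno, line in enumerate(js_source.splitlines(), start=1):
        for pattern, warning in BUG_PATTERNS:
            if pattern.search(line):
                findings.append((lineno, warning))
    return findings

snippet = "var x = 1;\nif (x == '1') { x = 2; }"
for lineno, warning in flag_bugs(snippet):
    print(f"line {lineno}: {warning}")
```

A rule list like this captures only the patterns its author thought of; the appeal of learned models is that they pick up many more such regularities without anyone enumerating them.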
By automating tasks and improving the quality of code, it is hoped these tools will help software engineers to become more productive and innovative. “Looking ahead, GAI-powered coding could (also) radically democratize software development by offering non-technical users a path to bring new products to life,” add the authors of the Omdia generative AI report.
According to McKinsey’s analysis, the direct impact of AI on the productivity of software engineering could range from 20% to 45% of current spending on the function. This value arises from reducing the time spent on tasks such as generating initial code drafts, code correction and refactoring. Another study found that software developers using GitHub’s Copilot completed a coding task 56% faster than those not using the tool.
McKinsey’s research, The economic potential of generative AI: The next productivity frontier (June 2023), estimates that Gen AI could add the equivalent of $2.6 trillion to $4.4 trillion in value annually across the 63 use cases it examined. The research shows that Gen AI could have an impact on most business functions; however, it identified four areas - customer operations, marketing and sales, software engineering, and research and development - that could account for 75% of the total annual value from Gen AI use cases.
One area where Gen AI is set to have a massive social impact is in the healthcare and pharmaceutical industries. Omdia’s Principal Analyst Andrew Brosnan notes that Gen AI uses are “far-reaching across the sector, from improving drug R&D programs and enhancing patient experience, empowerment, and privacy through synthetic patient data to reducing the administrative burden inherent in many of the sector’s activities.” Here we look at just a few areas where the technology is already proving transformational:
Drug discovery: Whereas the traditional approach to drug discovery involves screening countless molecules to identify potential drug candidates, Gen AI models can be trained on vast datasets of existing molecules and their properties. This allows them to “design” entirely new candidate molecules with desired characteristics much faster than was previously possible. For example, biotech pharmaceutical firm Entos has already paired Gen AI with automated synthetic development tools to design small-molecule therapeutics.
Accelerating clinical trials: Designing and completing clinical trials requires gathering vast amounts of data, and Gen AI can support trials in several ways. For example, Pfizer is using the technology to personalise recruitment messages to potential patients, thereby helping to increase the number of patients who are willing to participate in clinical trials. Google and Medidata are using Gen AI to generate synthetic data that closely resembles real-world patient data. This data is being used to train machine learning models that can predict clinical trial outcomes.
Drug safety: By training Gen AI models on data about adverse drug reactions, the technology can help to improve the safety of drugs and healthcare treatments. For example, Gen AI can be used to identify potential safety risks before they are introduced into clinical trials.
Personalised medicine: Gen AI models can analyse vast datasets of patient data to predict how individuals might respond to different treatment options. This allows doctors to personalise treatment plans based on a patient’s unique risk factors and potential outcomes. Gen AI can even be used to create virtual simulations of a patient’s body and disease, enabling doctors to test different treatment options virtually and predict their effectiveness for the individual patient.
Improved healthcare diagnostics: Gen AI is being used in new diagnostic tools that are both more accurate and faster than traditional methods, helping to improve the early detection and treatment of diseases. For example, Addenbrooke’s Hospital in Cambridge, England announced last year that it is working with Microsoft to help doctors calculate where to direct therapeutic radiation beams to kill cancerous cells. The same InnerEye technology also uses Gen AI to analyse medical scans to detect abnormalities, diagnose diseases and recommend treatment plans for cancer patients.
Omdia Report: The Generative AI Revolution: Understanding, Innovating and Capitalizing
Healthcare
Although it is inevitably the consumer applications of generative AI that have grabbed the headlines - such as their widespread use in student essays or in the creation of ‘deepfake’ images - it is undoubtedly the business world where the technology will have the greatest impact.
“Improving customer operations”
New generative AI applications
“Generative AI has very quickly brought the world to an inflection point, with the technology market rushing forward in what can only be termed a stumbling sprint,” notes Brad Shimmin, Omdia Chief Analyst, AI & Data Analytics.
Over the forecast period, Omdia expects the top four use cases for Gen AI will be virtual assistants, building generative models of the real world, writing assistants, and automated code generation and assist. By industry, consumer-targeted uses such as computer-aided art and photography will top the GAI market at $10.8 billion in 2028, followed by media and entertainment (game development and video and audio production and generation) with a market size of $8.1 billion. For gaming, 5% of games tech vendors tracked by Omdia’s Games Tech Market Landscape Database had Gen AI tools on offer as of March 2023, and many companies are now keen to play up their AI credentials. Omdia predicts that Healthcare will be the third largest industry at $6.2 billion. Here the top use cases will include image analysis, computational drug discovery and drug effectiveness, virtual assistants, medical treatment recommendation, and synthetic data generation (see Healthcare section).
Generative AI Timeline
2014
Creation of generative adversarial networks (GANs) deep learning architecture.
2016
OpenAI publishes research on generative models. DeepMind’s WaveNet uses generative audio to create human-like speech.
2017
Transformer network enables advances in generative models.
2019
OpenAI releases GPT-2.
2021
January
OpenAI releases DALL-E, a deep learning model that creates digital images.
April
The EU AI Act is first proposed.
2022
July
Image generation platform Midjourney launches.
November
OpenAI releases ChatGPT.
2023
February
Google launches ChatGPT competitor, Bard.
March
GPT-4 launches.
September
OpenAI debuts DALL-E 3 and announces ChatGPT can finally browse the internet.
November
X introduces its own Gen AI model called Grok. Trained on user content, it aims to compete with ChatGPT. The AI Safety Summit takes place at Bletchley Park, U.K.
December
Google announces Gemini LLM model.
2024
February
Google rebrands its Bard chatbot as Gemini, after the LLM that powers it.
However, the rapid adoption of the technology is also extremely controversial for several reasons. These include the huge impact on the workforce and society caused by many jobs being automated, the potential for discrimination in how the technology is implemented and the risk of bad actors spreading misinformation unchecked through Gen AI applications. That’s not to mention the challenges around IP (Intellectual Property), copyright and privacy that will inevitably result from using existing data (including text, audio and video) to generate new data. Below we look at just some of the risks and challenges of implementing generative AI solutions.
Gen AI risks and challenges
“Right now, what we're seeing is things like GPT-4 eclipse a person in the amount of general knowledge it has…In terms of reasoning, it's not as good, but it does already do simple reasoning. And given the rate of progress, we expect things to get better quite fast. So we need to worry about that.”
Dr Geoffrey Hinton, former Google employee
Undoubtedly the biggest challenge of AI, including Gen AI, is managing its impact on the workforce. AI has often been referred to as the catalyst for the next industrial revolution, automating many of the jobs that are currently carried out by people.
7%
potential increase in global GDP from generative AI
Research published by Goldman Sachs in April 2023 suggested that Gen AI could increase Global GDP by up to 7% while warning that 300 million full-time jobs could be lost to automation over the coming years. And although not all automated work will result in layoffs, Omdia states in its recent report, The Generative AI Revolution: Understanding, Innovating and Capitalizing, that Gen AI’s ‘ability to help with routine task automation, content creation, and customer service assistance have sparked fears of worker displacement and unemployment.’ The report continues: “Unlike with robotic automation, moreover, job displacement is likely to cut across both blue and white-collar work.”
It's a view shared by the Organisation for Economic Co-operation and Development (OECD) in its 2023 Economic Outlook: Artificial Intelligence and Jobs Report. In this report, it claims the occupations at highest risk from AI-driven automation are highly skilled jobs, representing around 27% of employment across 38 member countries including the U.K., Japan, Germany, the U.S., Australia and Canada.
Impact on workforce
27%
of employment across OECD member countries is in highly skilled occupations at the highest risk from AI-driven automation
As we have seen from the previous chapter, GAI has the potential to revolutionise many industries. The technology can help organizations become much more efficient, create new business models and even help in the discovery of life-saving medicines.
The body said “occupations in finance, medicine and legal activities which often require many years of education, and whose core functions rely on accumulated experience to reach decisions, may suddenly find themselves at risk of automation from AI.” Ian Hogarth, head of the U.K. Government’s AI Foundation Model taskforce, has also admitted ‘it was inevitable’ that more jobs would become increasingly automated, warning that the whole world will have to rethink the way in which people work. "There will be winners or losers on a global basis in terms of where the jobs are as a result of AI," he told the BBC last year. Sounding a more positive note, however, Goldman Sachs pointed out that jobs ‘displaced by automation have been offset by the creation of new jobs’, with around 60% of today’s workers employed in occupations that didn’t exist in 1940. As a result, it is difficult to predict with any accuracy how many jobs will be lost to AI and how many will be gained.
Generative AI bias
Gen AI models learn from the data that is used to train them. If this data contains biases, stereotypes or prejudices, then the system will inherit those and reflect them in its outputs. If, for example, the training data primarily reflects a specific demographic or viewpoint, the AI may not generate outputs that are relevant or representative of other groups. Inevitably, datasets can reflect the subconscious (or sometimes intentional) prejudices of the developers, many of whom are male (only 22% of professionals in the AI and data science fields are women, according to the Alan Turing Institute).
Nor is it just gender bias; often there is a race bias too. For example, the landmark Gender Shades project found that datasets comprising mostly white and male faces resulted in much lower facial recognition accuracy for women, especially women with darker skin.
The OECD has also outlined risks associated with AI’s growing influence over the workplace. These include AI tools making hiring decisions, with the risk of falling foul of biased AI-driven decisions much “greater for some socio-demographic groups who are often disadvantaged in the labour market already”. Moreover, this potential to unknowingly reinforce erroneous or potentially harmful data is complicated by the lack of explainability within Gen AI systems, claims Omdia.
Conversely, there have been examples of LLMs producing inaccurate results in their attempts to counterbalance AI bias. For example, in February 2024 image generation tools were temporarily withdrawn from Google’s recently launched Gemini LLM amid claims of ‘woke depictions of historical scenes.’ These included images depicting the Founding Fathers as African Americans, and of an Asian woman in military uniform in response to a request for an image of a Nazi-era German soldier. The tool was ‘not working as we intended,’ explained Google DeepMind CEO Demis Hassabis.
It is currently impossible to trace which sources a GAI system has used to inform its answers or content, and researchers are not entirely sure even how a Gen AI system comes to the conclusions it returns.
Risk of misinformation
As Gen AI technology continues to develop, it is likely that we will see even more sophisticated methods of spreading misinformation emerge. As the recent Omdia report states, many commentators have pointed to Gen AI as a tool that could be used for fraud and abuse at scale. Already we have noted several examples of high-profile deepfake images generated by AI (see previous chapter). In particular, concerns have been expressed at how Gen AI, and AI more broadly, could influence politics. Speaking before meetings with social media bosses in February this year, U.K. Home Secretary James Cleverly said that criminals and “malign actors” working on behalf of malicious states could use AI-generated deepfakes to hijack the general election.
He told the Times that “increasingly today the battle of ideas and policies takes place in the ever-changing and expanding digital sphere”, before adding: “The era of deepfake and AI-generated content to mislead and disrupt is already in play.” For example, Donald Trump supporters recently created and shared AI-generated fake images of black voters to encourage African Americans to vote for the Republican party.
Copyright infringement and IP challenges
Essentially, Gen AI works by creating something new from something existing. This means there is the potential for a breach of copyright if the artist, writer or even coder perceives their original work was used to create the computer-generated version.
Much depends on the training data that is used to set up the Gen AI system in the first place. If this contains copyrighted material, then there is a good chance the generative AI model could infringe on the original copyright. The EU AI Act (see next chapter) proposes that providers of Gen AI make a summary publicly available of any copyrighted material used for training their systems, including information about the identity of the copyright owners, the nature of the copyrighted material and the extent to which it was used. Last year, U.S. comedian and author Sarah Silverman announced she is suing ChatGPT creator OpenAI and Facebook-parent Meta, along with two other writers, over claims that their models were trained using their work without permission. And in December 2023, The New York Times sued OpenAI and its largest financial backer, Microsoft, accusing them of taking its articles without permission to train their language models. However, OpenAI claims The New York Times ‘hacked’ ChatGPT to build proof for the copyright lawsuit.
This followed the resignation of Dr Geoffrey Hinton from his role at Google earlier in the same month. Widely regarded as the ‘godfather of AI’ for his pioneering work in deep learning and neural networks, Hinton specifically cited concerns over AI chatbots, which he described as ‘quite scary’.
Back to top
While Gen AI offers exciting opportunities for businesses, implementing these solutions can be challenging. One of the main hurdles to overcome is poor-quality data. Because Gen AI models are data-driven, poor-quality or ‘noisy’ data containing inconsistencies or biases can lead to unreliable or even potentially discriminatory outputs.
Gen AI Deployment: Key Business Challenges
When it comes to implementation, integrating Gen AI models into existing workflows isn’t always straightforward either. Careful planning and collaboration between different groups of people, including data scientists, IT teams and business stakeholders, is required, with APIs helping to facilitate the smooth transfer of data. Importantly, staff also need to be trained on any Gen AI tools that are rolled out, both to maximise the benefits and to allay any concerns that people may have about the use of Gen AI. Finally, maintaining and developing Gen AI models can be expensive, requiring specialist expertise and computing power. For companies without extensive in-house resources, cloud-based platforms such as Google Cloud AI, Microsoft Azure AI Services and Amazon SageMaker may provide a solution. For businesses lacking the resources to build a full-fledged data science team, partnering with universities or research institutions could also be a cost-effective way to access expertise and develop Gen AI solutions collaboratively.
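As a sketch of what that integration work can involve, the hypothetical wrapper below surrounds a Gen AI call (stubbed here, since real provider APIs vary) with the input validation and output screening a customer-facing workflow would typically need. The function names, length limit and banned-term list are all illustrative assumptions, not any particular vendor's API:

```python
MAX_PROMPT_CHARS = 2000

def call_model(prompt):
    # Stub standing in for a real Gen AI provider API call.
    return f"[draft reply to: {prompt}]"

def generate_customer_reply(prompt, banned_terms=("guarantee", "refund")):
    """Validate the prompt, call the model, and screen the output
    before it reaches a customer-facing system."""
    if not prompt or len(prompt) > MAX_PROMPT_CHARS:
        raise ValueError("prompt is empty or too long")
    draft = call_model(prompt)
    # Flag drafts containing terms that need a human in the loop.
    flagged = [t for t in banned_terms if t in draft.lower()]
    return {"draft": draft, "needs_review": bool(flagged), "flags": flagged}

result = generate_customer_reply("Where is my order #123?")
print(result["needs_review"])  # False: draft can flow on automatically
```

Keeping these checks in a thin wrapper also makes it easier to swap one model provider for another without touching the surrounding workflow.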
“There are issues that companies need to be aware of, such as poor data quality, unfair bias, and lax security to name a few,” Mona Chadha, director of category management at Amazon Web Services told Forbes last year. “Quality of predictions of AI models depends strongly on the data used to train the models. Poor data quality can result in inaccurate results and inconsistent model behaviour, leading to lack of trust from customers and internal stakeholders.”
First proposed by the European Commission on 21 April 2021, the EU AI Act is the world’s first piece of dedicated AI legislation, although others are set to follow over the coming years (see Global AI legislation). The European Parliament adopted its negotiating position on the act on 14 June 2023, political agreement was reached in December 2023, and the legislation most likely won’t come into force fully until 2025.
To combat these issues, organisations need to invest in high-quality, well-curated datasets that reflect their intended use case. Implementing robust data cleaning and validation processes is crucial. Additionally, being aware of potential biases in the data and the AI model itself is essential. Techniques like debiasing algorithms and promoting diverse datasets can help mitigate bias. Metadata labels can also help by providing more balanced and diverse information; for example, including labels for gender and ethnicity in image datasets can help reduce bias in facial generation.
Another challenge for organisations is to ensure that Gen AI is used responsibly. Gen AI models can be misused to create deepfakes, to generate synthetic data for malicious purposes or to manipulate content in ways that could compromise privacy. Businesses need clear security policies around the use of Gen AI, as well as robust security measures to protect training data and generated outputs.
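As a minimal illustration of using metadata labels to rebalance a dataset, the sketch below downsamples each labelled group to the size of the smallest one. This is only one simple debiasing technique among many (reweighting and data augmentation are common alternatives), and the toy face-dataset records are illustrative assumptions:

```python
import random
from collections import Counter, defaultdict

def balance_by_label(records, label_key, seed=0):
    """Downsample each group to the size of the smallest one so no
    single demographic dominates the training data."""
    groups = defaultdict(list)
    for record in records:
        groups[record[label_key]].append(record)
    smallest = min(len(g) for g in groups.values())
    rng = random.Random(seed)
    balanced = []
    for group in groups.values():
        balanced.extend(rng.sample(group, smallest))
    return balanced

# Toy dataset skewed 80/20 by the 'gender' metadata label.
faces = ([{"id": i, "gender": "male"} for i in range(80)] +
         [{"id": i, "gender": "female"} for i in range(80, 100)])
balanced = balance_by_label(faces, "gender")
print(Counter(r["gender"] for r in balanced))  # 20 of each
```

Downsampling trades dataset size for balance; in practice teams often prefer to collect more data for the under-represented groups instead of discarding records.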
Although regulatory law is focused on more than just responsible AI, the rapid rise of Gen AI has caused lawmakers to ensure the technology is deployed responsibly. As a result, governments across the world are now working on legislation to control how the technology can best be deployed to protect its citizens at the same time as providing genuine innovation.
AI legislation and the EU AI Act
Requirements
Transparency: Organisations must be able to understand how their Gen AI systems work and how they generate content. This means that the developers of Gen AI systems will need to provide information about the foundational models that underpin them. The legislation will require services like ChatGPT to register the sources of all the data used to “train” the machine, as well as the algorithms used to generate content. To combat the high risk of copyright infringement, the EU AI Act will also oblige developers to publish a summary of the works of scientists, musicians, illustrators, photographers and journalists used to train their systems.
Accountability: The developers and operators of Gen AI systems must be able to explain how their systems work and why they generate certain content. For example, developers will need to have a clear understanding of the biases and limitations of their systems.
Robustness: Gen AI systems must be robust and reliable and must be able to withstand attack or manipulation. This means that developers will need to take steps to ensure that their systems are protected from a cyber-attack.
Diversity: The development and use of Gen AI systems must be inclusive and fair, and not discriminate against any group of people. This means that developers will need to be aware of the potential for bias in their systems and will need to take steps to mitigate these risks.
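As an illustration of what the transparency requirement's record-keeping might involve in practice, the sketch below aggregates hypothetical training-source logs into a per-copyright-owner summary. The data structure and field names are illustrative assumptions, not anything prescribed by the Act:

```python
import json

# Hypothetical training-source log: if each source is recorded with
# its copyright owner, material type and extent of use, producing a
# public summary becomes a straightforward reporting exercise.
training_sources = [
    {"title": "News archive A", "owner": "Publisher A", "type": "text", "tokens_used": 1_200_000},
    {"title": "Photo set B", "owner": "Agency B", "type": "image", "tokens_used": 300_000},
    {"title": "News archive C", "owner": "Publisher A", "type": "text", "tokens_used": 500_000},
]

def summarise_by_owner(sources):
    """Aggregate works and extent of use per copyright owner."""
    summary = {}
    for source in sources:
        entry = summary.setdefault(source["owner"],
                                   {"works": [], "tokens_used": 0})
        entry["works"].append(source["title"])
        entry["tokens_used"] += source["tokens_used"]
    return summary

print(json.dumps(summarise_by_owner(training_sources), indent=2))
```

The hard part in reality is capturing this provenance at data-collection time; retrofitting it onto an already-trained model is far more difficult.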
In addition to these requirements, Gen AI applications may also be affected by other provisions within the EU AI Act, such as the ban on unacceptable risk AI systems. For example, a Gen AI system that is used to generate deepfake images or video could under certain circumstances be considered an ‘unacceptable-risk’ AI system – categorised by the EU as a system that’s a threat to the ‘safety, livelihoods and rights of people.’
Promoting responsible AI
As a result of the EU AI Act, businesses trading in Europe will need to put systems in place ahead of time to ensure that Gen AI systems are being used in an ethical and responsible way.
That’s because under the legislation, fines for non-compliance with the prohibition of certain AI practices are steep: up to €35 million or 7% of the company's total worldwide annual turnover for the preceding financial year, whichever is higher. Speaking to AI Business last year, Clare Walsh, Director of Education at the Institute of Analytics, talked about the need for organisations to act responsibly in rolling out generative AI solutions.
“My advice to SMEs is to assume that the law will eventually catch up with negative practices. The Information Commissioner is very clear there will be no place to hide from reckless innovation.”
Microsoft, the biggest investor in OpenAI, has also produced its own report, Governing AI: A Blueprint for the Future (May 2023), on the need for responsible AI. In it are several recommendations, including the creation of new government-led AI frameworks and the implementation of “effective safety brakes” for AI systems that control critical national infrastructure.
“We are committed and determined as a company to develop and deploy AI in a safe and responsible way. We also recognise, however, that the guardrails needed for AI require a broadly shared sense of responsibility and should not be left to technology companies alone.”
However, while organisations have generally recognised the need for greater controls over AI applications, industry response to the tight restrictions in the EU AI Act has been far from positive. Published in July 2021, a few months after the EU first proposed its AI Act, a report by the Center for Data Innovation warned that the Artificial Intelligence Act would cost the European economy €31 billion over the next five years and reduce AI investments by almost 20%. Then at the end of June 2023, a few weeks after the first draft of the EU AI Act was agreed, some of the biggest companies in Europe took collective action to criticise the Act, claiming that it was ineffective and could negatively impact competition. In an open letter sent to the European Parliament, over 150 executives from companies including Renault, Heineken, Airbus and Siemens stated their commercial concerns.
A European SME that deploys a high-risk AI system will incur compliance costs of up to €400,000 which would cause profits to decline by 40 percent.
“As engaged stakeholders of the European economic sector, we would like to express our serious concerns about the proposed EU Artificial Intelligence (AI) Act. In our assessment, the draft legislation would jeopardise Europe's competitiveness and technological sovereignty without effectively tackling the challenges we are and will be facing. This is especially true regarding generative AI. Under the version recently adopted by the European Parliament, foundation models, regardless of their use cases, would be heavily regulated, and companies developing and implementing such systems would face disproportionate compliance costs and disproportionate liability risks.”
How should businesses prepare for the EU AI Act?
Nevertheless, despite reservations from some commentators, greater regulation of the generative AI industry is coming over the next few years. So what steps can organisations take to prepare?
Critically assess data governance practices for training of AI models in view of the potential requirements of the draft EU AI Act (as well as other global frameworks in the making).
Prepare the support documentation of the relevant AI system in line with potential conformity, documentation, data governance and design obligations.
Assess processes in place for reporting incidents related to AI systems.
Consider existing legislation: There are already laws in place which govern some aspects of AI. These include human rights laws dealing with discrimination, as well as Article 22 of the GDPR, which limits the circumstances in which AI can make automated decisions about an individual.
According to current GDPR legislation, “the data subject shall have the right not to be subject to a decision based solely on automated processing, including profiling, which produces legal effects concerning him or her or similarly significantly affects him or her.”
First proposed by the European Commission on April 21, 2021, the EU AI Act is the world’s first piece of dedicated AI legislation, although others are set to follow over the coming years (see Global AI legislation). Formally approved by the EU in February 2024, it is expected to come into force in 2025. Gen AI tools such as OpenAI’s ChatGPT and Google Gemini are not formally categorised in one of the risk categories. However, Gen AI systems will need to be submitted for review before their commercial release. They will also be subject to several requirements outlined below.
Omdia Report: “The Generative AI Revolution: Understanding, Innovating and Capitalizing”
Global AI legislation
While undoubtedly the EU is leading the way when it comes to widespread AI legislation, other regions are following suit, mostly with more targeted interventions either to promote or control its use.
China has been actively developing a regulatory landscape for AI for some time. And while there isn't a single, comprehensive "AI Act" like the one in the EU, China has taken a step-by-step approach, with several regulations targeting different aspects of AI. In particular, China's approach to AI regulation emphasizes national security, social order, and ethical considerations. In April 2023, the Cyberspace Administration of China (CAC) issued draft measures for the Management of Generative Artificial Intelligence Services, later finalized and implemented in July 2023 as the Interim Measures for the Management of Generative Artificial Intelligence Services. Under the measures, service providers are required to apply to the CAC for a security assessment and file their algorithms in accordance with the existing Algorithmic Recommendation Provisions. The measures also establish requirements for the data used to train or optimize generative AI products: generated content must respect public order and socialist core values, and must not subvert state power, disrupt social order, be discriminatory, or violate intellectual property rights.
Unlike Europe, and to some extent China, the US does not yet have any comprehensive AI legislation. However, there are several related measures. These include the National Artificial Intelligence Initiative (NAII), a government-wide effort to promote the development and deployment of AI, and the proposed Algorithmic Accountability Act of 2022, which would require companies to assess the impacts of the automated systems they use and sell. The bill aims to create greater transparency about when and how automated systems are used, and to empower consumers to make informed choices about the automation of critical decisions. Other examples include the AI in Government Act of 2020, which established the AI Center of Excellence in the General Services Administration (GSA), and a New York City law passed in 2021 to protect against bias in automated decision systems. This local law requires employers to conduct bias audits of AI-enabled tools used for employment decisions.
Published in 2021, the UK government's white paper entitled Artificial Intelligence: Policy Framework for Innovation outlines the government's vision for responsible AI development and deployment. It emphasizes the importance of data protection, algorithmic transparency, and algorithmic accountability, but it doesn't propose a single law. Nor does the UK's Online Safety Act, which received Royal Assent in October 2023, directly address AI; its primary focus is on requiring online platforms to tackle harmful content and to protect users, especially children. Instead, the UK currently relies on the Data Protection Act 2018 (DPA 2018) and the UK General Data Protection Regulation (UK GDPR) to regulate data used in AI systems. These laws ensure responsible data collection, storage, and use.
According to London-based law firm Mayer Brown, practical steps that organizations might take to prepare for AI legislation both in Europe and the rest of the world include the following:
Find Out More
Want to Partner With Us?
If you’re interested in associating your company with AI or IoT and getting your brand in front of enterprise decision-makers and practitioners, we offer a range of partnership opportunities. Showcase your knowledge and expertise through our suite of digital marketing solutions, increase brand awareness with onsite advertising, or work with us to build a multi-faceted campaign that supports your marketing objectives.
Discover Technology that connects the dots
AI Business covers the broad and growing range of business applications of artificial intelligence across industries and the globe. Our team of seasoned journalists provides daily coverage of the vast amount of news impacting businesses, bolstered by research and commentary from a team of Omdia expert analysts.
DISCOVER MORE OPPORTUNITIES
IoT World Today keeps readers at the forefront of IoT development to help drive business improvements. Our team of seasoned journalists provides daily coverage of the vast amount of news impacting businesses, bolstered by research and commentary from a team of Omdia expert analysts.
Assess the marketplace, advance your business and amplify your presence in the market using our deep knowledge of tech markets and our actionable insights. The experts at Omdia bring unparalleled, world-class research and consultancy to navigate the now and create the future.
IOT WORLD TODAY
AI BUSINESS
Informing, Educating, And Connecting The Global AI Community
AI Business provides news and analytical insights you can trust. We’re dedicated to offering an objective view of the AI space to enable better strategic decisions. Our editorial content focuses on the practical applications of AI technologies rather than hype, buzzwords and high-level technology updates.
Monthly Unique Users: 55k+
Monthly Page Views: 100k+
Newsletter Subscribers: 15k+
Followers on social channels: 60k+
Connecting You to the Breakthroughs & Insights from the World of IoT
IoT World Today provides insight, news and analysis on the Internet of Things, featuring the latest stories, case studies and real-life use cases across the ever-evolving world of IoT technologies. From smart cities to flying cars, robotics and everything in between, our mission is to offer a unique insight into the complexities of IoT, its challenges and its opportunities, so that our readers stay ahead of change and feel empowered to make smarter decisions.
202K+
296K+
61K+
AI BUSINESS:
Join us at the flagship AI summit of London Tech Week, the place where commercial AI comes to life. Unlock the latest insights, hear from experts as they explore AI use cases and the latest developments in cutting-edge AI technology, and take away the skills needed for AI implementation in business. The AI Summit London unites the most forward-thinking technologists and business professionals to explore the real-world applications of AI. Think unparalleled opportunities for learning, deep-dive discovery and non-stop networking (not to mention the incredible line-up of heavyweight speakers).
Your AI journey starts here
Register Now
12-13 June 2024, Tobacco Dock, London
Contents
Risks and Challenges
Gen AI Deployment
AI legislation