
At Mastercard, we're already working with these emerging technologies, employing them to safeguard the more than 125 billion transactions on our network every year. Our teams of thousands of AI engineers, data scientists and technologists are committed to developing practical solutions that integrate privacy and ethics by design and adhere to the highest standards in security. Across our capabilities — data intelligence, open banking, identity, fraud protection and cybersecurity — Mastercard ensures trust is at the forefront and technology is used responsibly and ethically. By embracing technological change while addressing its challenges, we can step into the future of commerce and ensure technology influences the world for the greater good. 


The future of tech convergence 

The convergence of these technologies represents a shift in how technology is applied across sectors. 

Unstructured data has exploded online, especially on social media. In 2022, it accounted for 90% of all data created.lxvi But because older forms of AI cannot process it, it has been underutilized for analytical purposes. Generative AI can handle it quickly, so that could change: massive quantities of previously underutilized information could yield new insights. 

[Graphic: Unstructured data accounted for 90% of total data created in 2022]

The market value of AI in biotech is expected to grow by almost 30% annually through 2032.lxiv The most exciting of gen AI's possible applications in biotech is in the costly, time-intensive drug discovery process, where its data-processing abilities, its utility in creating new pharmaceutical molecules and in protein engineering, and its help in modeling the potential outcomes of clinical trials could all prove effective, among other use cases.lxv

Amazon Web Services and NVIDIA, the hardware company whose GPUs and CPUs have been vital to AI, are collaborating on next-gen cloud infrastructure and software to power ongoing breakthroughs in gen AI by facilitating foundation model training and app development.lxi lxiii

AI + Data

AI's effectiveness heavily depends on the quality and accessibility of data. Through tokenization and stringent data management, data integrity and quality are improving for future AI training and inference processes. AI algorithms are increasingly effective in analyzing data, identifying patterns and detecting anomalies, contributing to enhanced data integrity and more insightful analytics. 
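The tokenization mentioned above can be illustrated with a minimal sketch: a sensitive value is swapped for a random token, and the mapping lives only in a secure vault, so analytics pipelines can join and count on tokens without ever touching raw data. The `TokenVault` class below is hypothetical, for illustration only; production systems would back it with hardened, access-controlled storage.

```python
import secrets

class TokenVault:
    """Hypothetical vault mapping sensitive values to random tokens.

    An in-memory dict stands in for the hardened storage a real
    tokenization service would use.
    """

    def __init__(self):
        self._token_to_value = {}
        self._value_to_token = {}

    def tokenize(self, value: str) -> str:
        # Reuse an existing token so analytics can still group and join on it.
        if value in self._value_to_token:
            return self._value_to_token[value]
        token = secrets.token_hex(8)  # random; carries no information about the value
        self._token_to_value[token] = value
        self._value_to_token[value] = token
        return token

    def detokenize(self, token: str) -> str:
        # In practice, only authorized services could call this.
        return self._token_to_value[token]

vault = TokenVault()
t = vault.tokenize("4111 1111 1111 1111")
```

Downstream systems see only `t`; the same input always maps to the same token, so aggregation still works, while a breach of the analytics layer exposes nothing sensitive.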

Data + Compute

As data volumes grow exponentially, the role of computing capabilities in data processing becomes more crucial. This synergy is particularly evident in areas like real-time analytics and machine learning algorithms, where the need for immediate and efficient data processing is paramount. Advances in computing technologies are enhancing data processing speeds to enable the extraction of insights from large datasets with greater efficiency. 


AI + Compute

The fusion of AI with advanced computing resources is enhancing the capabilities of AI models. High-performance computing and specialized hardware like GPUs are pivotal in training expansive and complex AI models. Concurrently, AI is transforming the computing infrastructure, optimizing and automating resource allocation and workload management. This synergy enables more efficient processing, leading to quicker and more accurate AI-driven insights.


Here are some of the ways these trends intersect: 

"We have waited for this moment where the data and the compute and the GPUs and all the technology come together, and I would say it's probably the single most exciting period in technology in decades."

Safra Catz | Oracle CEO lxvii

As technology evolves, data protection and management measures must keep pace with it.

The financial burden of maintaining state-of-the-art data security measures can be significant, particularly for smaller organizations.

Data must be kept secure, but it must also remain accessible to authorized personnel. Overly restrictive data policies can hinder operational efficiency and innovation, but insufficient ones could make breaches possible.

Complying with international data-related regulations (like the GDPR, the CCPA and the EU AI Act) is complex for both multinational corporations that work across jurisdictions and small and medium-sized companies with limited resources and in-house expertise. Non-compliance can lead to hefty fines and reputational damage.

Developing effective security measures to protect against advanced malware, ransomware and phishing attacks is increasingly challenging as bad actors improve their capacities.

In some markets, there could be a shortage of skilled professionals in data science, analytics and AI, which can hinder the development and operation of insight factories.

Establishing and maintaining an adequate data analytics infrastructure can be costly, especially for small to medium-sized enterprises.

Responsible data collection and utilization practices are key, especially when it comes to personal data used for predictive analytical purposes.

Assimilating data from various sources and formats and ensuring that it is harmonized and consistent for analysis can be complex and resource-intensive.

Data subject to analysis must be accurate, complete and reliable. Poor quality data can lead to misleading conclusions and flawed decision-making.

Tokenization and digital assets are complex matters that can be misunderstood. A lack of understanding by users can hinder adoption and trust.

There are concerns about how tokenized personal information is used and stored and who has access to it, especially in the case of biometrics.

While tokenization enhances security, it's not immune to cyber threats. Smart contract vulnerabilities, blockchain platform security breaches and the risk of token theft remain concerns.

Navigating the complex and evolving regulatory landscape surrounding digital assets and tokenization is a significant challenge. Jurisdictions have varying regulations regarding using and trading tokenized assets, impacting their adoption.

Dealing with the heat generated by high-performance computing so it does not impact performance and longevity is a persistent challenge.

New technologies like quantum computing may take longer to develop than expected given the hype surrounding them. Unrealistic expectations could lead to disillusionment and potentially less funding for research and development.

More computing power means more data centers, which are energy intensive. Now responsible for an estimated 3% of the world's energy use, data centers could account for 4% by the decade's end.xliv Green computing and data center practices, which stress circularity and efficiency in energy use and materials, will be necessary to counteract this demand.

Demand has led to periodic GPU shortages, leaving businesses to wait.

Interconnected devices that collect and transmit data can capture vast amounts of information about people and serve as targets for hackers. Edge devices like sensors integrated into Internet of Things (IoT) systems can be particularly vulnerable.

Disparities in access to advanced network technologies across regions and socio-economic groups could lead to unequal development and require consideration.

Ensuring that different devices and systems can communicate seamlessly is a challenge, particularly given the vast array of manufacturers and standards.

Bridging intelligent networks with legacy systems and maintaining the resultant stacks are complex tasks.

Spatial computing platforms and tech vendors will likely have access to novel types of user information. Users’ physical movements could betray sensitive health information. Data from eye-tracking solutions could indicate what users look at. This could deepen the already charged societal conversation about data privacy and collection.

Societal unease with immersive technologies may have played a role in their failure to achieve mass market success by now. They have been blamed for problems including obesity, myopia, depression, social atomization, dissociation from reality and beyond. Regulatory and industry guidelines on the proper use of these tech tools could emerge. 

Developing sufficient high-quality content and applications to attract and retain users will be crucial for the mainstream adoption of immersive tech. The bar is high: Content and apps need to be compelling enough for users to purchase expensive hardware and commit to wearing it despite potential inconveniences like discomfort and alienation from surrounding environments.

Accuracy issues: Achieving high accuracy in detection is crucial to avoid the consequences of false positives or negatives.

Lack of public awareness: Increasing public education about deepfakes is vital to mitigate risks.

Advancement of deepfake technology: Keeping up with the development of deepfake technology is a constant challenge for makers of detection tools.

Lack of global coordination: Given the range of threat sources, global coordination is required to develop content authentication approaches.


Ensuring the code generated by AI is reliable, efficient and bug-free requires human oversight. There's also the risk of developing functionally correct code that is inefficient or under-optimized for its specific use case.

Integrating AI tools with existing development environments and workflows can be complex and may require significant adjustments. 

IP issues could arise as AI coding assistants possibly use bits of copyrighted code in their training. A class-action lawsuit is pending against the developer platform GitHub, its owner Microsoft and OpenAI for using platform users' code and other materials in creating GitHub Copilot.xiii

It still needs to be determined whether developers can express complex algorithms precisely enough in natural language for an AI to generate appropriate code.

Avoiding over-reliance: As developers gain confidence in AI-generated code, they may be tempted to "cut and paste" it directly into their codebases, which could have negative results. Gen AI tools require human oversight, and hand-coding will remain necessary.

The first chatbot answered questions transmitted from an electric typewriter to a mainframe. It was named ELIZA after Eliza Doolittle in the play "Pygmalion."iii

Conversational voice assistants including Siri and Alexa populated billions of pockets and kitchens worldwide.

Despite advances, AI assistants are still evolving and may face challenges in understanding context or managing complex tasks.

Building trust is essential given concerns about AI replacing humans in the workforce and knowing too much about individuals.

AI assistants may require access to personal data for some use cases. Safeguarding this data and ensuring user privacy is paramount. 

Ensuring AI decisions are responsible — transparent, fair and unbiased — is crucial. Outputs must be carefully managed to avoid reflecting, or even amplifying, societal and underlying data biases.


In today’s evolving technological landscape, businesses must adapt swiftly to harness new opportunities and remain relevant. This issue of Mastercard Signals explores tech trends poised to reshape commerce over the next three to five years.

Advances in three areas — artificial intelligence, computational power and data technology — are converging to propel these trends forward. As they spur innovation, technology will become more intuitive, interactive, immersive and embedded in our daily lives — with significant implications for finance, retail and other sectors.

Mastercard Signals | Q1 2024

Audrey, a six-foot rack of electronics at Bell Labs, was the first machine to recognize speech — the digits zero to nine spoken by its inventor.ii

Microsoft introduced Clippy, an animated paper clip, to assist users of its Office products.iv

Gen AI models train on vast amounts of internet data, some of which is copyrighted. Users of gen AI outputs could unwittingly expose themselves to legal issues. In December 2023, The New York Times became the first major U.S. media corporation to file a gen AI-related copyright infringement suit against OpenAI and Microsoft.

Creating lightweight, comfortable, unobtrusive, affordable and stylish hardware for mass market adoption has been problematic. A new crop of devices still has far to go in miniaturization and wearability. Numerous devices, including Google Glass and Magic Leap headsets, have struggled to gain wide adoption. 

People carried personal digital assistants that synced with their desktop computers to manage contacts and calendars. Early models included the Tandy Z-PDA, the Apple Newton MessagePad and USRobotics' Palm Pilot.

Microsoft, Google, Meta and Amazon incorporated gen AI into their consumer and office products, improving the capabilities of virtual assistants.