Emerging tech
Seven signals shaping the future
Competing in the age of embedded intelligence
Technology strategy is crossing a threshold. Where once it was about automation and cutting costs, now it’s structural: shaping decisions, orchestrating operations and determining the bottom line.
This changes how firms compete, but the shift is happening faster than most can adapt to it. There is a risk of investing in the wrong tools, but there is a more consequential risk of ceding control over how decisions are made as intelligence embeds itself into devices, agents and infrastructure beyond direct human oversight.

A common arc runs throughout: intelligence is slipping out of centralized systems and embedding itself in the core machinery of business. Decisions that once sat deep inside cloud systems or office workflows are moving to the edge — to devices, robots, autonomous agents and the systems that power payments, logistics and customer experience. This is not drift; it is a fundamental realignment in how intelligence is built, deployed and trusted:
executive summary
In this issue
Distributed intelligence
AI gets closer to the action
view section
Power Struggle
Efficiency becomes a strategic priority
Confidential computing
Privacy tech shifts from shield to engine
Quantum fusion
Hybrid computing opens a high-performance path
Agents as brokers
Negotiation, distribution and the new battleground
Embodied AI
Robots step into real workflows
Product development
Agents evolve from tool to teammate
Signals
May 2026
Conclusion
The upside is significant: faster cycles, more resilient operations, new structures and products that adjust continuously to real-world conditions. But the deeper implication is more demanding: As intelligence becomes embedded, leadership changes — from deploying tools to deliberately shaping the conditions under which software is allowed to decide and act. These trends illuminate how that environment is evolving — and where the next edges of risk and competitive power will form.
Distributed architectures move decision-making to the point of need.
Where it acts
Embodied AI carries that capability into real world environments.
How it decides
Agentic systems negotiate, commit and transact autonomously.
What constrains it
Efficiency pressures determine what can scale sustainably.
How it’s trusted
Privacy-centric architectures catalyze safe, compliant and data-driven collaboration.
The competitive advantage in AI is shifting from “bigger is better” toward operational optimization. As costs rise and constraints bite, organizations that can’t run intelligence close to decisions — inside devices, workflows and infrastructure — will be structurally slower, costlier and harder to govern. This is a redesign of intelligence for the real world.
Beyond size
Specialization beats scale
Over the past decade, AI breakthroughs were driven by scale: larger models, more data, more compute. That approach is now facing constraints: rising costs, limited high-quality data,1 heavy energy use2 and diminishing performance gains.3
The accuracy of frontier models (massive, sophisticated LLMs) from providers like Google, Meta, Anthropic and OpenAI is now plateauing, even as they require exponentially more capital and power to grow.
Diminishing returns4
There are still breakthroughs: Anthropic’s Mythos Preview, for example, has crossed a capability threshold that most LLMs have not, by demonstrating autonomous, multi-step action5. But another trend sees intelligence moving out of centralized LLM clouds and closer to where data is generated. Rather than routing every decision through distant data centers, AI is increasingly running on devices and at the edge of networks—where speed, reliability, and autonomy matter most. This shift is especially important for robotics, industrial systems, security cameras, wearables, and vehicles, where milliseconds can determine safety, performance, or user experience. As a result, tomorrow’s AI advantage won’t come just from sheer size, but also from training strategies and architectures designed for deployment, including:
Domain-specific and small language models (DSLMs and SLMs) built for specific jobs and deployed inside enterprise systems, and on devices at the edge.
Large tabular models (LTMs) built for structured data; they outperform language models on huge numeric datasets.
Efficiency techniques that deliver more capability per unit of compute.
This shift is already reshaping investment priorities — across chips, cloud, devices and enterprise software6 — as well as the trajectory of AI innovation.7 But it also increases the governance imperative. As intelligence spreads across thousands of models embedded in systems, devices and partners, organizations must ensure decisions remain explainable, auditable and reversible at scale.
2030: A plausible future
Distributed AI reinvents decision-making
Routine decisions in commerce and operations are handled by a mesh of specialized models operating across edge and cloud. A payment terminal flags a transaction anomaly locally and acts immediately; only uncertain cases take the round trip to a cloud model that can draw on broader context. Learning loops continuously capture edge cases, generate new examples and update deployed models incrementally — improving performance without full retraining.

For businesses, this changes how intelligence scales. Systems become more resilient to outages, more responsive in real time and more economical to run. But it also changes how intelligence is governed: thousands of models making decisions across devices, locations and partners must still remain accountable. This is where distributed intelligence stops being a technical capability and becomes a leadership challenge.
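The edge-first pattern in this scenario — act locally on confident scores, escalate only the uncertain band to the cloud — can be sketched in a few lines. All function names, scores and thresholds below are illustrative assumptions, not any real payment system's logic:

```python
# Sketch of edge-first decisioning with cloud escalation.
# All names, scores and thresholds are illustrative stand-ins.

def edge_score(txn: dict) -> float:
    """Small on-device model: returns a fraud probability in [0, 1]."""
    return 0.9 if txn["amount"] > 5000 else 0.1  # stand-in for a compact model

def cloud_score(txn: dict) -> float:
    """Larger cloud model with broader context; slower and costlier to call."""
    return 0.2  # stand-in

def decide(txn: dict, low: float = 0.2, high: float = 0.8) -> str:
    """Act locally on confident scores; escalate only the uncertain band."""
    p = edge_score(txn)
    if p <= low:
        return "approve"   # confident: no round trip to the cloud
    if p >= high:
        return "decline"   # confident: act immediately at the edge
    # Uncertain band: take the round trip to the cloud model.
    return "approve" if cloud_score(txn) <= low else "review"

print(decide({"amount": 40}))    # low risk, handled at the edge: approve
print(decide({"amount": 9000}))  # high risk, declined locally: decline
```

The key economic property is that the expensive cloud call happens only for the uncertain slice of traffic; confident cases never leave the device.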
Why it matters
Lower latency, greater uptime, data stays put
When models fit on everyday devices, intelligence moves to where decisions happen, delivering four advantages:

Latency and uptime
Local decisioning cuts delays and lets systems operate through network disruption — critical for real‑time payments and risk controls.8

Data locality
Sensitive data stays where it is generated, reducing exposure and easing compliance with privacy and AI regulations.

AI inclusion
Compact models lower infrastructure barriers, enabling adoption by smaller businesses, public services and emerging markets.10

Cost discipline
Shifting high-volume tasks from usage-based cloud APIs to small, task-specific language models can slash inference costs by up to 90%.11
Momentum
Real-world gains and market bets
In specialized workflows, DSLMs often deliver better performance than general-purpose large language models (LLMs) because they’re tuned to the domain’s formats, vocabulary and decision rules, improving accuracy while reducing latency and cost.11
Gartner predicts that by 2027, organizations will use small, task-specific AI models three times more often than general-purpose LLMs.12
Market shift
Microsoft, Google, Meta, Nvidia and others are aggressively marketing enterprise SLMs.
Big tech leans in
Impacting the bottom line
Shifting high-volume tasks from usage-based cloud APIs to small, task-specific language models can slash inference costs by up to 90%.12
By 2028, 80% of data used for AI is expected to be synthetic.14
At the model and software layer, sparsity is changing the economics. Architectures such as mixture of experts (MoE) models activate only what’s needed for a given task, dramatically reducing compute. Other frameworks optimize memory usage and compute paths, allowing models to train faster on less hardware.
Efficient model architectures
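The sparsity idea behind mixture-of-experts routing can be shown with a toy example: a router scores every expert, but only the top-k actually run, so compute per input scales with k rather than with the total expert count. This is a pure-Python illustration with made-up scoring functions, not a real MoE implementation:

```python
# Toy mixture-of-experts routing: only the top-k experts execute per input,
# so compute scales with k, not with the total number of experts.
# All functions and numbers are illustrative.

NUM_EXPERTS, TOP_K = 8, 2

def gate(x: float) -> list[float]:
    """Stand-in router: scores each expert for this input."""
    return [((i + 1) * x) % 1.0 for i in range(NUM_EXPERTS)]

def expert(i: int, x: float) -> float:
    """Stand-in expert sub-network i."""
    return x * (i + 1)

def moe_forward(x: float) -> tuple[float, int]:
    scores = gate(x)
    top = sorted(range(NUM_EXPERTS), key=lambda i: scores[i], reverse=True)[:TOP_K]
    total = sum(scores[i] for i in top)
    # Weighted sum over the selected experts only; the other six never run.
    y = sum(scores[i] / total * expert(i, x) for i in top)
    return y, len(top)

y, experts_run = moe_forward(0.37)
print(experts_run, "of", NUM_EXPERTS, "experts executed")  # 2 of 8
```

In production MoE models the same principle means that a network with hundreds of billions of total parameters can activate only a small fraction of them per token.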
At the hardware level, specialized AI chips offer an alternative to general-purpose processors for real-time use cases — a foundational shift for applications that require autonomy, immediacy and data locality. Google’s Edge TPU17 and devices like NVIDIA Jetson AGX Orin18 make it possible to run real-time AI processing on cameras, drones and robots — lowering infrastructure costs while improving responsiveness and reliability.
Purpose-built processors
Challenges
Bottlenecks, silos, regulation and risk at the edge
Demand for AI-ready chips outstrips supply. Companies face rising costs, long lead times and hard trade-offs between training, inference and edge deployment.14
Compute constraints & chip competition
In many enterprises, data is spread across incompatible formats, vendors and jurisdictions. This slows training, limits feedback loops and weakens self-learning systems. Fragmented data stacks prevent integrated analytics and delay AI initiatives, with nearly half of CEOs acknowledging that legacy data chaos blocks their AI progress.15
Fragmented data ecosystems
AI governance is diverging globally, with different regions pursuing unique regulatory regimes (e.g., the EU’s risk-based regime versus sectoral approaches elsewhere). This creates fragmented compliance burdens and market barriers for companies operating across borders.16
Regulatory inconsistency
Advantage will accrue to organizations that treat AI less like a model and more like an operating system: specialized models, clear escalation rules and continuous learning under tight control. In banking and payments, this means moving fraud, risk and authentication intelligence closer to where transactions occur — escalating only when necessary. For merchants, it means embedding intelligence directly into checkout, logistics and operations rather than bolting it on via the cloud. More generally, as open-source and cheaper AI models become more accessible, enterprises will have greater flexibility in how they build and deploy, while incumbent providers will likely face increased competitive pressure. This dynamic is accelerating experimentation, lowering barriers to entry and reinforcing the move toward modular, multi‑model architectures rather than dependence on a single vendor or model class.
Strategic implications
Win where work happens
Deliver more value per unit of compute
efficiency as strategy
Operate reliably under real-world constraints
latency, connectivity, privacy
Improve continuously without sacrificing safety or compliance
governed learning loops
Embed intelligence where cloud-only models cannot reach
edge-first decisioning
Breakthrough technologies
The efficiency stack
AI’s next phase won’t be defined by smarter models alone, but by efficiency across the stack. Breakthroughs in model architecture, training and deployment are breaking the cloud’s monopoly on AI and pushing decision‑making closer to the point of action.
Sources
1 betterdata.ai · 2 iea.org · 3 openreview.net · 4 arxiv.org · 5 bbc.co.uk · 6 edgeir.com · 7 kbvresearch.com · 8 transcript-iq.com · 9 arxiv.org · 10 worldbank.org · 11 arxiv.org · 12 hai.stanford.edu · 13 gartner.com · 14 Mastercard Global | Inside Mastercard’s new gen AI engine · 15 bain.com · 16 cio.com · 17 promarket.org · 18 thinkrobotics.com
Large tabular models
Intelligence beyond words
LLMs deal with words, but most enterprises deal with numbers: transactions, ledgers, prices and operational telemetry. This is where large tabular models (LTMs) come in. They can interrogate structured, numerical data at scale to inform a whole range of decisions. Mastercard has been building its own LTM trained on a range of its datasets, not least the billions of transactions the business processes each year.13 The objective is to create an insights engine to improve tools and services like fraud prevention, cybersecurity, loyalty programs and small business tools.
The hardest challenge is governance: when intelligence is distributed across edge devices, partners and fast‑moving model updates, it becomes difficult to keep decisioning consistent, transparent and accountable. Traditional controls struggle to keep up with systems that learn continuously and act in real time. The result is a new leadership problem: how to build an operating layer that makes AI decisions traceable, enforceable and reversible without stalling deployment.
Governance
“Responsible AI needs to be a bedrock philosophy for businesses going forward. It’s more than an ethics overlay. It’s the infrastructure that allows AI to scale safely, reliably and with trust. And it needn’t slow innovation; effective governance is what allows innovation to move faster, with confidence.”
Greg Ulrich | chief AI and data officer at Mastercard
This is a sea change in digital commerce. As AI agents begin to buy, negotiate and transact on behalf of humans and their businesses, competitive advantage will shift toward control of the operating layer that agents rely on. In agent‑mediated markets, power no longer sits only with whoever persuades the end customer — it concentrates toward whoever defines permission, legitimacy, and payment authority for software acting autonomously.
Forging the agentic operating model
AI agents are starting to act as brokers — searching, comparing, negotiating and transacting for consumers and enterprises. As a result, a new battleground is emerging. The prize is control of agentic commerce’s operating layer: the systems that validate authority, enforce intent, manage risk and execute payment when software — not a human — chooses. Platforms, retailers, marketplaces and payment networks are racing to create the trusted building blocks that agents will reuse at scale — verified product and merchant data, real-time pricing and availability information, identity and credential checks, dispute and liability frameworks, and compliant payment flows.
The agentic enterprise
A small manufacturer runs procurement through AI agents operating within owner-defined guardrails. When a machine fails, an agent sources parts, validates suppliers, checks policy constraints, confirms price thresholds, and executes payment — all without human review. The owner never sees most options. They see exceptions. In this world, competitive power doesn’t belong to the seller with the best pitch — but to the infrastructure that decides which sellers are even considered, under what terms, and with what authority.
Mastering the operating layer
As soon as purchasing decisions are delegated to software, the most valuable real estate in commerce becomes the systems that validate agent legitimacy, enforce intent, and execute payment with accountability. This agentic operating layer determines:
Authorization
Is this agent allowed to act — and within what limits?
Trust
Which offers, prices, signals and suppliers are considered valid?
Accountability
Who is liable when software makes a bad decision that costs money?
Execution
How are commitments made, paid for and audited?
Credentials and spend controls
How do agent‑initiated purchases use tokenized credentials with programmable limits and revocation?
Trusted inputs
Do agents receive timely, verified signals — pricing, availability, reviews, fraud scores, delivery forecasts — to shape decisions?
Micropayment rails
Can agents pay per request for data and services (e.g., fraud scores, availability checks, ETAs) efficiently and at scale?
Authorization and accountability
Are identity and permissions validated, risks checked, disputes resolved and liability clearly assigned?
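As a concrete illustration of the first question above — programmable limits and revocation on an agent credential — a minimal owner-defined policy check might look like the sketch below. The field names and rules are hypothetical; this is not the Agent Pay or AP2 token format:

```python
# Minimal sketch of programmable spend controls on an agent credential.
# Field names and rules are hypothetical, not any real token specification.
from dataclasses import dataclass

@dataclass
class AgentToken:
    owner: str
    per_txn_limit: float          # max amount per purchase
    category_allowlist: set       # merchant categories the agent may buy from
    revoked: bool = False
    spent_today: float = 0.0
    daily_limit: float = 500.0

def authorize(token: AgentToken, amount: float, category: str) -> bool:
    """Enforce owner-defined guardrails before an agent-initiated payment."""
    if token.revoked:
        return False
    if amount > token.per_txn_limit:
        return False
    if category not in token.category_allowlist:
        return False
    if token.spent_today + amount > token.daily_limit:
        return False
    token.spent_today += amount   # commit against the rolling daily limit
    return True

t = AgentToken(owner="acme", per_txn_limit=200.0,
               category_allowlist={"machine_parts"})
print(authorize(t, 150.0, "machine_parts"))  # True: within all limits
print(authorize(t, 150.0, "travel"))         # False: category not allowed
t.revoked = True
print(authorize(t, 10.0, "machine_parts"))   # False: credential revoked
```

The point of the sketch is that consent, limits and revocation become machine-enforced rules evaluated on every agent-initiated payment, rather than policy documents reviewed after the fact.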
Agents take the wheel
Spending on agentic AI systems is projected to more than triple from $430 billion in 2025 to $1.3 trillion by 2029, for a CAGR of 32%.23
Gartner predicts 90% of B2B buying will involve AI agents by 2028, representing more than $15 trillion annually.24
Morgan Stanley projects nearly half of U.S. online shoppers will use AI agent tools by 2030, potentially adding $115 billion to e-commerce sales.25
Rails for agents
Payment networks deliver agent-ready credentials
Mastercard launched Agent Pay in 2025 to let AI agents transact safely wherever cards are accepted, with programmable agentic tokens and built-in spend controls designed for delegated authority.19
Agent Pay and Agentic Tokens
Agent Payments Protocol | Verifiable Intent (Mastercard)
AP2 provides a common rulebook for agent-initiated payments built upon cryptographically signed mandates, verifiable receipts and cross-platform interoperability.20 Co-developed with Mastercard, Verifiable Intent is an open, standards-based trust layer to verify agent-led transactions.21
Players standardize mandates and interoperability
x402 Foundation
Enables AI agents to pay automatically, in tiny increments, as they interact — buying data, tools or compute only when needed. By removing checkout friction, micropayments make agent-to-agent transactions viable at internet scale, turning APIs and services into on-demand, machine-priced markets.22
Micropayment rails make machine-to-machine commerce viable
AGENT AUTHENTICATION
Trust and verification platforms are positioning to become default attestation layers for agent legitimacy. Trulioo’s partnership with AP2 is a clear example of how third-party identity will anchor delegated spend.58
Identity providers move early
Trust becomes programmable.
Identity, consent, spend limits and dispute handling stop being policy documents and start becoming machine-enforced rules.
Disintermediation risk rises
Providers that fail to embed themselves into agent permissions, credentials and payment flows risk being reduced to passive utilities.
What’s really at stake
Control moves from interfaces to infrastructure
If you don’t shape how agents decide, you inherit decisions made elsewhere
Visibility collapses
Traditional signals — clicks, impressions, conversion funnels — disappear when agents transact automatically.
This shift creates clear winners and losers: agentic commerce will reward software, not marketers. Over the next few years:
Building trust
Sources
19 mastercard.com · 20 cloud.google.com/blog · 21 mastercard.com · 22 coinbase.com · 23 my.idc.com · 24 zdnet.com · 25 businessinsider.com
Excitement about agentic commerce is driving high growth expectations from analysts.
Banks and networks that extend trusted authorization, programmable credentials and accountable payment rails will remain central; others will see volume routed around them.

Platforms and marketplaces that make their data, terms and identity machine-readable will be favored by agents; those that don’t will become invisible.

Merchants will need to compete for agent selection, not human attention — optimizing for legitimacy, compliance and reliability, not just brand.
The AI race is no longer defined by who builds the smartest models. It’s defined by who can power them. As demand accelerates, the true constraint on AI scale is resources — electricity, cooling, water, and physical space. This is transforming AI from a competition of algorithms into a competition for resources. For businesses, that means AI advantage will increasingly belong to those that can run intelligence cheaply, reliably, and within physical limits.
Rethinking the grid
The critical need to secure energy will see the AI industry active on multiple fronts. Startups will deliver small-scale nuclear reactors that provide zero-carbon power to AI grids. Working with governments, tech giants will aim to revive large-scale nuclear projects: Pennsylvania’s Three Mile Island nuclear power plant, currently slated for rebirth as a clean-energy model,43 will have been recommissioned, with similar efforts underway globally. Smart grid, battery and other technologies will see waves of innovation, even as AI hardware advances toward ever-greater efficiency. Energy necessity will drive sustainable invention.
The energy arms race behind scaled intelligence
The race to scale AI has triggered an energy arms race. Nations are competing not only for talent and advanced chips but also for the electricity required to power massive AI data centers, the central nervous systems of AI tools. For now, the drive for reliable power often depends on fossil fuels, and AI remains far from sustainable. But competition is accelerating sustainability initiatives and sparking innovation. The result will be breakthroughs that change energy imperatives in the AI era.
The energy cost of intelligence
AI is resource-hungry. By 2030, AI data centers could require as much energy annually as does Canada,26 while by 2027 cooling them could use 4.2 to 6.6 billion cubic meters of water each year, exceeding Denmark’s consumption.27 According to the International Energy Agency, a ChatGPT query could require 10 times the energy of a Google search.28 Emissions are equally concerning: Output from AI data centers is expected to surge by almost 800% by 2030.29 Meanwhile, the AI boom drives intensive mining of rare-earth minerals,30 often through unsustainable practices. Innovations to address this are crucial. The industry cannot scale without major leaps in efficiency — and business leaders cannot scale AI without confronting its resource burden.
AI data centers: Rising energy demand and carbon emissions 31
The sustainable gold rush
Recent developments in the AI industry reflect a growing commitment to keeping the AI boom powered — sustainably.
Microsoft plans a $1.6 billion investment to restart Pennsylvania’s Three Mile Island reactor by 2028.70
Gas turbines are experiencing an unexpected revival as demand for always-available power surges. Orders now stretch years out, though regulators are pushing back on fossil-intensive solutions.32
The return of firm power
Big tech goes nuclear
Google is acquiring seven small modular reactors (SMRs) from Kairos Power, scheduled for delivery between 2030 and 2035.72
Amazon Web Services is exploring deploying small modular reactors in Virginia, as well as a partnership with Washington State’s Energy Northwest73 for in-state SMRs.
Fusion, biomass and alternative grids
Fusion startups are attracting unprecedented funding, strengthened by investment from AI companies hungry for long-term, clean power.36
Soaring stock prices
Power grid tech companies saw soaring stock prices in 2025 thanks in part to data center build-out. Grid tech stocks as a group climbed by 30% year-to-date by mid-December. Grid-related expenditures worldwide were predicted to be 16% higher in 2025 than in 2024.76
All in for efficiency
Processing-in-memory (PIM) and compute-in-memory (CIM)
Processing-in-memory (PIM) and compute-in-memory (CIM) approaches dramatically cut data-movement energy, reducing overheads by up to 85–95%.38
New compute infrastructures
Next-generation chips
Next-generation chips — some claiming multiple orders of magnitude more efficiency — signal a future where compute is radically less power-hungry.39
FP8 precision formats and mixture-of-experts (MoE)
Innovations like FP8 precision formats and mixture-of-experts (MoE) architectures reduce compute needs while increasing speed. Breakthrough models trained with these techniques demonstrate that efficiency can now outperform brute-force scale.40
Smarter model designs
Shifting AI processing to the edge can reduce data-transfer loads by up to 90%, cutting costs and easing grid pressure.41 Sectors like healthcare, retail and manufacturing are early beneficiaries as intelligence moves into devices, equipment and local networks.42
Benefits of the edge
Organizations need to master efficiency: running intelligence with less power, closer to the edge, and with architectures designed for a world where energy scarcity is real. Firms must be ready for heightened public and regulatory scrutiny of AI’s energy footprint, while securing reliable, cost‑effective power becomes core to their operational resilience. Investing in efficient chips, advanced cooling, and edge architectures will protect margins as AI scales.
Sustainable AI as a competitive differentiator
Red tape and green goals
Projects like smart grids, nuclear plant reactivations and the deployment of small modular reactors need regulatory approval and face the prospect of legal and political challenges. In the near term, AI expansion will continue to depend on reliable power, much of it still generated from fossil fuels. Today coal remains the top energy source for data centers and fossil fuels account for almost 60% of their power.44
Sources
26 accenture.com · 27 unric.org · 28 unep.org · 29 accenture.com · 30 miningdigest.com · 31 accenture.com · 32 wsj.com · 33 techtarget.com · 34 latimes.com · 35 nbcnews.com · 36 time.com · 37 finance.yahoo.com/news · 38 accenture.com · 39 geeky-gadgets · 40 accenture.com · 41 manta-tech.io/blog · 42 accenture.com · 43 reuters.com · 44 carbonbrief.org
AI’s most serious constraint isn’t intelligence — it’s power. As models scale and demand accelerates, energy, water and infrastructure are becoming the limiting factors of progress. What was once a technical efficiency issue is now a strategic one, shaping where AI can run, who can afford it and how fast it can grow. The future of AI will be defined as much by how efficiently it is powered as by how intelligently it is designed.
Computing in space
Elon Musk has flirted with an idea to push data centers off the earth entirely and into orbit, where solar energy is abundant. In this imagined future, the cloud is no longer a metaphor, but a literal infrastructure in the sky.35 The technical challenges would be immense, however.
Nuclear energy is reentering the mainstream, with major enterprises investing in large-scale plant restarts and next generation small modular reactors (SMRs).33
Biomass-powered data centers and next generation grid-tech providers are fast becoming part of the AI energy ecosystem.37
The technology model for powering AI is shifting fast. A new efficiency stack is emerging across hardware, software, and deployment models.
Embodied AI is where intelligence stops advising and starts doing — in environments where mistakes cost money, delays have consequences, and failure is visible. After decades of automation built for narrow, predictable tasks, humanoid robots are beginning to operate in spaces designed for humans: messy, dynamic, and constrained by safety, time and physics. This is no cosmetic upgrade to automation. It is a pivotal moment: When AI systems move bodies, carry loads and operate alongside people, software risk becomes operational risk — and that is something leaders must govern, not just debug.
From language to labor
AI is stepping into the physical world via humanoid robotics. It's making inroads into industrial, warehouse, home and other environments originally designed for the human body. The aim: an upshift in economic power and societal impact across manufacturing, healthcare, logistics and domestic life.
High-tech shift change
Across mature economies, cities are facing a simple arithmetic problem: fewer working‑age people and more services to deliver.49 Night shifts and hazardous roles are hardest to fill, and the workforce keeps aging. Humanoid “shift‑extenders” ease the strain where vacancies and risk converge, especially after dark. Using vision‑language‑action models (VLAMs), they work from simple natural‑language worklists, for example at a factory loading dock in the hours before dawn: “Inspect truck braking assemblies; swap filters in the engines of forklifts 1 and 2; stage pallets for the 06:30 delivery.” Every task includes enforceable guardrails: emergency overrides, continuous monitoring and mandatory logging. As robots act with greater autonomy, safety controls, audit trails and human escalation paths become non‑negotiable infrastructure — not optional features.
Demographics and demand
Two significant forces are impacting the labor market. On one side, fears about AI-driven job displacement are well documented: as software systems grow more capable, they can automate tasks that once required human judgment. On the other, across advanced economies, labor scarcity is becoming a structural problem. Shrinking populations, aging workforces and rising service demand are reducing the supply of workers just as the need for hands-on work is growing — especially in sectors where substitution has historically been hard: healthcare,45 logistics,46 manufacturing47 and public services.

In this context, embodied AI is not about replacing workers. It’s about preserving system capacity where human availability, safety and fatigue become the hard limits. The question facing organizations is not whether humanoid robots will reach technical viability, but whether they can be integrated safely, reliably and economically into workflows already under strain.
Prime-age workers are becoming more scarce. The number of countries in which the working population is shrinking has risen from two in 1980 to 50 today and will reach 77 by 2040.
The world is running out of workers
15-64 year olds’ share of the overall population48
Healthcare is among the hardest-hit sectors. One estimate forecasts a shortfall of 10 million healthcare workers by 2030.23 And the challenge extends beyond this sector. U.S. manufacturing reported almost half a million24 open positions in spring 2025, a number expected to exceed 2 million25 by 2030. Similarly, most logistics companies26 say labor shortages are undermining their ability to fulfill demand.

Humanoid robotics could be transformative in addressing chronic workforce deficits. It’s no surprise that the healthcare, manufacturing and logistics industries have been among the early adopters and innovators in the humanoid space.
Morgan Stanley predicts the global stock of humanoid robots could exceed 1 billion by 2050, though outcomes will vary significantly with adoption rates, regulation, and unit economics.51
will be used in industry and commerce52
Number of humanoids in homes in 2050, globally53
U.S. households with a humanoid in 2050 54
10%
Cost of one humanoid55
2024: $200,000
2028: $150,000
2050: $50,000
Factory floors and funding rounds
The robot race is accelerating — from mass deployments in Chinese factories to billion-dollar funding rounds and breakthrough technologies. The aim is to build robots that can rival and even surpass the versatility and utility of the human body, making them a driving force of the global economy.
Large-scale humanoid deployments
Chinese humanoid manufacturer UBTECH has begun delivering thousands of robots to industrial facilities in China in the “first large-scale industrial deployment of full-size humanoids anywhere in the world.”56 The company projects building 5,000 humanoids in 2026 and 10,000 in 2027.
Project Prometheus, a new “physical AI” venture from Amazon founder Jeff Bezos, has secured $6.2 billion in funding, making it one of history’s most heavily financed early-stage startups.61 It will concentrate on AI applications in manufacturing and engineering and represents the first time that Bezos has held an executive role since he ceded his CEO position at Amazon in 2021.62
Taiwanese hardware giant Foxconn has announced plans to deploy humanoid robots at its Texas facility, which makes AI servers for Nvidia.57
Pivotal shift in investment
Former leaders of Tesla, Nvidia, Hugging Face and Google DeepMind have announced the formation of UMA (Universal Mechanical Assistant), a robotics startup focused on humanoids.36
Robotics startups collectively raised $2.3B in Q1 2025, with 70% of the money flowing to “specialized robotics startups” that work in domains like logistics automation and automated picking.37
From data to dexterity
VLAMs (vision-language-action models)
For the first time, AI systems are gaining the perception, coordination and control needed to operate in environments built for people, not machines. This will be a step‑change in operational capacity, safety, and labor resilience. Two key technologies are giving it legs:
The VLAM: Powering robot action63
Whole-body motion control (WBMC) + predictive learning
Earlier robotic systems controlled limbs independently, resulting in stiff, unnatural movement. WBMC treats the robot as an integrated system, optimizing all joints simultaneously for balance, fluid locomotion and manipulation. When WBMC is combined with predictive learning, robots gain the ability to anticipate future states, adjust trajectories in real time and recover proactively from disturbances. Imagine a humanoid robot navigating a crowded factory floor while carrying a tray of fragile components. It senses a worker stepping into its path, shifts its weight, pivots smoothly and maintains balance without dropping a single item. This approaches human-level dexterity and opens the door to robots performing complex and dynamic tasks.
Startups that specialize in next-gen VLAMs to power robotics were venture-fund favorites in 2025:
· Physical Intelligence:64 $600 million (Series B)
· Dyna Robotics:65 $120 million (Series A)
· Flexion Robotics:66 $50 million
· Sereact:67 $26 million
· Scout AI:68 $15 million (seed)
Microscopic machines
Researchers recently created autonomous robots smaller than a grain of salt — yet smart enough to sense, decide and act without human control. The microscopic machines, powered by light, promise capabilities straight from the pages of science fiction, such as monitoring individual cells inside our bodies or assembling tiny devices invisible to the naked eye.69
Heat, hands and human safety
Power is a binding constraint
Most humanoids still operate for only a few hours per charge.70 Heavier batteries undermine balance and mobility, while frequent recharging or swaps add downtime and operational complexity.71
The heat is on
During active work, lithium-ion batteries can heat up to over 100°C within a minute.72 Controlling temperatures means limiting robot activity or adding cooling systems, both of which reduce efficiency and usable working time.
Hands in the air
Manipulation depends on fine motor control and tactile feedback. Human hands are capable of 27 degrees of freedom (DoF), and Tesla’s latest Optimus robot reaches 22 DoF,73 but the absence of tactile feedback means humanoids still can’t be trusted to handle fragile objects.74
Results in the balance
Stability isn’t passive. It requires continuous sensing and correction, and uses significant energy.75 When power or control fails, robots fall — creating safety risks, asset damage and human intervention costs.76
The real world is dynamic
Humanoids face the same challenge as autonomous vehicles do: The physical world is ever-changing. Shifting environments demand AI models that integrate vision, language and motor control seamlessly, a capability still in its infancy. Human safety is a key limiting factor. As autonomy increases, tolerance for error collapses, turning edge cases into board‑level risks.
Engineering the future, managing the risks
Businesses will shift from treating automation as a cost play to using embodied AI as a tool for operational resilience: extending shift coverage, improving safety in hazardous environments and supporting continuity during workforce gaps. Embodied AI changes robotics from an operational curiosity into a new class of financed, insured and risk-scored assets. It introduces questions that regulated institutions cannot defer: Who is accountable for autonomous physical actions? How are failures audited and insured? And how is risk priced when “AI error” causes physical harm?
Sources 45–76: mckinsey.com, tech.co, npr.com, worldbank.data.com, oecd.org, morganstanley.com (×5), reuters.com, eweek.com, www.nytimes.com, cbinsights.com (×2), marketsandmarkets.com, nytimes.com (×2), wikipedia.com, www.therobotreport.com, techfundingnews.com, theaiinsider.tech, www.pymnts.com, www.prnewswire.com, www.seas.upenn.edu, www.simplexitypd.com (×6), www.teslarati.com
Physical AI marks a change in where intelligence shows up, not just what it can do. After decades of automation built for narrow, predictable tasks, AI systems are beginning to operate in environments designed for humans — messy, dynamic and constrained by the real world. This isn't an incremental upgrade to robotics, but a threshold moment: when intelligence must contend with space, time, safety and consequence. The result is a new class of systems that bring AI out of abstraction and into lived environments.
Figures from CB Insights put investment in robotics at record levels, up 74% year-on-year.58
Robotics represents 9% of all venture capital funding.59
Together, these deployments and funding rounds signal a shift from experimentation to industrialization: robots are moving out of labs and pilots and into balance sheets, insurance contracts and regulatory scope.
The broader physical AI market is projected to grow almost 50% each year between now and 2032.60
Privacy is no longer just about locking data away — it’s becoming a way to unlock new value. As regulation tightens and trust becomes harder to earn, progress depends on finding ways to analyze, share and learn from data without exposing it. Technologies like confidential computing and synthetic data are changing the equation, allowing organizations to collaborate and innovate without handing over sensitive information. The result is a shift from data ownership to data use — where proof replaces disclosure, and privacy becomes an engine for growth rather than a brake.
Collaborate, analyze, authenticate — without exposure
Privacy-enhancing technologies (PETs)
Privacy-enhancing technologies (PETs) use advanced mathematical techniques to let organizations analyze data without exposing sensitive information.
Together, these approaches turn privacy from a constraint into a capability — enabling institutions to collaborate across borders, deploy AI in sensitive domains like fraud prevention, and verify identity with less data and less friction. As PETs mature, they’re moving from point solutions to core infrastructure, especially in payments, financial services and AI-driven ecosystems — expanding what can be safely analyzed and shared at scale.
Confidential computing protects data during processing in secure hardware environments that keep workloads isolated, even in the cloud.
Personalized commerce without personal data
Privacy-enhancing technologies are moving from niche controls to core operating infrastructure in sectors where trust, regulation and data sensitivity are essential. The shift is not about hiding data, but about unlocking collaboration, analytics and AI without expanding risk.
Cybercrime prevention
Sharing intelligence, not raw data
Institutions can share threat signals without exposing customer data — improving detection while reducing regulatory and reputational risk.
A shopper opens a retailer app and sees offers aligned to her tastes, budget and loyalty status — without the retailer ever compiling a centralized personal profile. Instead, the shopper’s phone keeps a local “preference wallet.” When the retailer asks, “Is this person eligible for this offer?”, proof replaces disclosure. For example, to verify age, the shopper proves “over 21” without sharing a birthdate. For deeper analysis, models run inside locked-down “secure rooms” where data remains encrypted even while in use.
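The “proof replaces disclosure” pattern can be illustrated with a deliberately simplified sketch. This is not a real zero-knowledge proof (production systems use actual ZK protocols, so the verifier needs no shared secret); here a trusted issuer checks the birthdate privately and signs only the predicate. All names (`issue_claim`, `ISSUER_KEY`) are invented for illustration:

```python
import hmac
import hashlib

ISSUER_KEY = b"demo-issuer-secret"  # toy shared secret; a real ZKP needs none

def issue_claim(holder_id: str, predicate: str) -> bytes:
    """Issuer verifies the birthdate privately, then signs only the predicate."""
    message = f"{holder_id}:{predicate}".encode()
    return hmac.new(ISSUER_KEY, message, hashlib.sha256).digest()

def verify_claim(holder_id: str, predicate: str, tag: bytes) -> bool:
    """Verifier learns only that the predicate holds, never the birthdate."""
    expected = hmac.new(ISSUER_KEY, f"{holder_id}:{predicate}".encode(),
                        hashlib.sha256).digest()
    return hmac.compare_digest(tag, expected)

tag = issue_claim("shopper-42", "over_21")
print(verify_claim("shopper-42", "over_21", tag))   # claim checks out
print(verify_claim("shopper-42", "over_65", tag))   # a different claim fails
```

The point of the sketch is the data flow: the birthdate stays with the issuer, and the retailer sees only a yes/no attestation it can verify.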
Identity, eligibility and access
Organizations can confirm claims — from age and eligibility verification to travel and healthcare services — without building new surveillance infrastructure or hoarding identity data.
Open finance
Open finance pushes this model further: regulated, permission‑based access to financial signals without bulk data transfer. When combined with PETs, open finance shifts from “sharing data” to “sharing decision‑grade signals” — enabling richer underwriting, personalization and fraud prevention without expanding surveillance or data hoarding.
Credit, underwriting and pricing
Lenders can evaluate risk using broad signals while keeping raw data local, encrypted or abstracted — expanding access and inclusion while limiting exposure.
Privacy tech goes mainstream
Confidential computing becomes standard
Secure execution environments are now standard across major clouds, allowing sensitive workloads to run without exposing underlying data. Three quarters of organizations are already using or piloting these approaches.77
Data sharing without data transfer
Clean rooms, multiparty computation and federated learning enable institutions to generate shared insights — from fraud signals to overlap analysis — without handing over raw data.78
Proof replaces disclosure
Zero-knowledge systems allow organizations to verify claims (eligibility, age, authority) without collecting or storing personal information.79
Tokenization becomes the execution layer
By replacing sensitive identifiers with programmable, context‑bound tokens, institutions can let systems authenticate, authorize and transact without ever re‑exposing the underlying data. In practice, tokenization turns privacy into an execution layer — enabling auditability, revocation and control at machine speed.
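A minimal sketch of context-bound tokenization, assuming a simple in-memory vault. The class and method names are illustrative, not any particular vendor’s API:

```python
import secrets

class TokenVault:
    """Toy vault: swaps a sensitive identifier for a context-bound token."""

    def __init__(self):
        self._store = {}

    def tokenize(self, pan: str, context: str) -> str:
        token = secrets.token_hex(8)        # random, carries no information
        self._store[token] = (pan, context)
        return token

    def detokenize(self, token: str, context: str) -> str:
        entry = self._store.get(token)
        if entry is None or entry[1] != context:
            raise PermissionError("token revoked or used outside its context")
        return entry[0]

    def revoke(self, token: str) -> None:
        self._store.pop(token, None)        # revocation at machine speed

vault = TokenVault()
t = vault.tokenize("5555-4444-3333-2222", context="ecommerce:merchant-123")
print(vault.detokenize(t, "ecommerce:merchant-123"))  # authorized path works
```

Because the token only resolves within the context it was issued for, a token stolen from one merchant is useless elsewhere, and revoking it never touches the underlying identifier.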
Secure AI and collaborative analytics
Confidential and encrypted computing
Confidential and encrypted computing make it possible to run AI on regulated data — such as payments, health records and identity — without centralizing or exposing it. At the same time, privacy-preserving techniques like differential privacy help protect individuals while preserving analytical usefulness. (Differential privacy works by adding small, random variations — called “noise” — to calculations. This noise masks individual records while still revealing accurate patterns about the group.) In one example, a multi-hospital collaboration trained a diagnostic model on 590,000 chest x-rays using differential privacy and achieved accuracy nearly identical to non-private models.100
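The noise mechanism described above can be sketched in a few lines. This is the classic Laplace mechanism for a counting query (a count has sensitivity 1, so the noise scale is 1/ε); the function names are illustrative:

```python
import math
import random

def laplace_noise(scale: float) -> float:
    """Sample Laplace(0, scale) via the inverse-CDF transform."""
    u = random.random()
    while u == 0.0:                 # avoid log(0) at the distribution's edge
        u = random.random()
    u -= 0.5                        # now uniform on (-0.5, 0.5)
    return -scale * math.copysign(1.0, u) * math.log(1.0 - 2.0 * abs(u))

def dp_count(records, predicate, epsilon: float) -> float:
    """Noisy count: masks any single record, preserves the group pattern."""
    true_count = sum(1 for r in records if predicate(r))
    return true_count + laplace_noise(1.0 / epsilon)

ages = [34, 67, 45, 23, 71, 52, 38, 64]
print(dp_count(ages, lambda a: a >= 65, epsilon=1.0))  # close to 2, rarely exact
```

Smaller ε means more noise and stronger privacy; averaged over many queries the noisy counts stay centered on the true answer, which is why group-level patterns survive.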
Federated learning and clean rooms
Federated learning lets models train across distributed datasets, while clean rooms provide controlled environments where multiple parties can link or analyze sensitive datasets under strict rules. These approaches are especially valuable when partners need measurable results — such as understanding how much their customer bases overlap — without revealing proprietary or personal information. Mastercard uses Databricks clean rooms to enable multiple parties to run approved analyses and generate shared insights without directly accessing or copying each other’s raw data.101
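A stripped-down sketch of the federated pattern: each participant trains locally (here the “model” is just a mean), and only parameters and weights, never records, reach the coordinator. Function names are illustrative:

```python
def local_update(records):
    """Runs inside each institution; raw records never leave."""
    n = len(records)
    mean = sum(records) / n
    return mean, n                       # parameters + sample weight only

def federated_average(updates):
    """Coordinator combines parameters, weighted by each client's data size."""
    total = sum(n for _, n in updates)
    return sum(mean * n for mean, n in updates) / total

bank_a = [120.0, 80.0, 95.0]             # stays on bank A's servers
bank_b = [210.0, 190.0]                  # stays on bank B's servers
updates = [local_update(bank_a), local_update(bank_b)]
print(federated_average(updates))        # matches the pooled mean of all records
```

Real systems exchange gradient updates for neural networks rather than means, but the governing property is the same: the coordinator reconstructs a shared result without ever seeing a row of either bank’s data.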
Advantage shifts from owning the most data to enabling the most trusted computations. Participation in digital ecosystems will increasingly depend on provable safety — enforced through cryptographic controls, governance and auditability. Power will concentrate around platforms that set clear rules for safe participation, enable privacy-preserving collaboration, and evolve cryptographic defenses as threats change. In this new model, proof replaces disclosure as the foundation of trust — transforming privacy from a compliance obligation into a strategic differentiator. Banks can use PETs to share threat intelligence, improve fraud detection and support cross‑institution analytics while reducing compliance burden. Businesses can deliver hyper‑personalized services without storing sensitive profiles, thus strengthening trust and reducing regulatory exposure.
Enhancing performance
Complexity and convergence
Interoperability & auditability
Interoperability is the other bottleneck. Standards for encrypted analytics and ZKPs are still forming, and real-world implementations can be difficult to evaluate without specialized expertise. That fragmentation slows ecosystem adoption: Developers can’t easily swap libraries or combine vendor components and auditors can’t get simple, consistent evidence that the system is behaving as intended.106 NIST’s Workshop on Privacy-Enhancing Cryptography107 (WPEC) spotlights active work across MPC, ZKPs and fully homomorphic encryption — and underscores that the field is still converging on practical, deployable patterns. Meanwhile, systematic surveys of ZKP frameworks highlight usability and benchmarking gaps that make it harder to choose, compare and operationalize tooling across stacks.108
Stack complexity
Most deployments won’t rely on a single privacy-enhancing technology. Instead, they’ll combine clean rooms, MPC, ZKPs, federated learning and perhaps homomorphic encryption, which allows computations to be performed directly on encrypted data without ever decrypting it. The hard part is deciding which method to use, then managing performance, cryptographic keys, policies and failure modes across a mixed stack.103 Industry frameworks like Semantic Kernel’s hybrid orchestration104 and HyScaleFlow106 are early attempts to manage multi-layer orchestration across cloud and edge environments.
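Homomorphic encryption is the most exotic item in that stack. A toy version of the additively homomorphic Paillier scheme shows the core idea, summing values that are never decrypted individually. The key sizes here are deliberately tiny for readability; real deployments use 2048-bit-plus moduli and a hardened library:

```python
import math
import random

def keygen(p=1_000_003, q=1_000_033):          # toy primes, NOT secure sizes
    n = p * q
    lam = math.lcm(p - 1, q - 1)
    mu = pow(lam, -1, n)                        # valid because g = n + 1
    return (n, n + 1), (lam, mu)                # (public key), (private key)

def encrypt(pub, m):
    n, g = pub
    r = random.randrange(2, n)
    while math.gcd(r, n) != 1:                  # r must be invertible mod n
        r = random.randrange(2, n)
    return (pow(g, m, n * n) * pow(r, n, n * n)) % (n * n)

def decrypt(pub, priv, c):
    n, _ = pub
    lam, mu = priv
    return (pow(c, lam, n * n) - 1) // n * mu % n

pub, priv = keygen()
c1, c2 = encrypt(pub, 20_000), encrypt(pub, 22_500)
c_sum = (c1 * c2) % (pub[0] ** 2)               # addition done on ciphertexts
print(decrypt(pub, priv, c_sum))                # 42500, summed while encrypted
```

Multiplying two ciphertexts yields an encryption of the sum of their plaintexts, so an untrusted party can aggregate encrypted amounts without ever holding the private key.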
Sources 77–84: www.linuxfoundation.org, mpc.cs.berkeley.edu, blog.ju.com, www.nist.gov, www.forbes.com, www.databricks.com, betterdata.ai, ibm.com
Two forces lead this change...
AI governance and accountability
Decision receipts and clearly enforced constraints make automated decisions easy to explain, audit and defend.
What’s changing
Why it’s better
Spotlight: Synthetic data and digital twins
Synthetic data addresses data constraints by reproducing the statistical patterns of real‑world data when live datasets are scarce, fragmented or sensitive. It accelerates model development, improves performance and enables AI training in regulated environments, while reducing reliance on live customer or transaction data.83 Digital twins go further, by modeling behavior rather than records. In banking, they simulate how payment flows, fraud controls, credit portfolios and liquidity systems interact under stress, allowing institutions to test attacks, regulatory shocks and operational failures without touching production data. Together, synthetic data and digital twins enable safer testing, faster learning and stronger resilience by design.
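A minimal illustration of the statistical idea: fit independent per-column normal distributions on real rows, then sample new ones. Real synthetic-data generators model joint structure and correlations; this sketch preserves only column-level statistics, and all names are illustrative:

```python
import random
import statistics

def synthesize(real_rows, n_samples):
    """Fit mean/std per column on real data, then sample synthetic rows."""
    columns = list(zip(*real_rows))
    params = [(statistics.mean(c), statistics.stdev(c)) for c in columns]
    return [[random.gauss(mu, sigma) for mu, sigma in params]
            for _ in range(n_samples)]

# Pretend these are (transaction amount, items per basket) records.
real = [[100 + i, 5 + 0.1 * i] for i in range(50)]
fake = synthesize(real, 1000)

real_mean = statistics.mean(r[0] for r in real)
fake_mean = statistics.mean(r[0] for r in fake)
print(round(real_mean, 1), round(fake_mean, 1))  # column statistics line up
```

No synthetic row corresponds to a real record, yet aggregate analyses run on `fake` track those run on `real`, which is the property that makes synthetic data usable for model development in regulated environments.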
Risk and cost fall together
Less exposed data means lower breach impact, reduced compliance burden and fewer reputational tail risks.
Ecosystem participation expands
Institutions can collaborate across borders, sectors and partners without renegotiating trust or compliance every time.82
AI scales where data can’t move
Regulated industries can deploy advanced analytics and automation without centralizing sensitive information.81
By 2028, as much as 80% of data used for AI could be synthetic.84
Post‑quantum upgrades accelerate adoption
Mandatory cryptographic re-engineering is becoming a forcing function to embed privacy by design across payments, identity and cloud architectures.80
The way that products are conceived, designed and built is being transformed. As AI shifts from helping teams build products to actively shaping them, speed becomes structural, not incremental. Products stop being delivered and start being run — continuously updated, adjusted and improved in response to real‑world signals. In this world, success belongs to teams that can learn and act faster than the market around them.
Product enhancement at agent speed
A product manager for a SaaS company that serves mid-sized retailers gets an alert from her agentic product experience platform: During the holiday rush, retailers are attempting — but failing — to export their transaction data, the sales and payment records they need for accounting, for tax reporting and to power ERP systems. Working from support tickets, app reviews and usage patterns, the AI agent pinpoints the problem: Files are too large and downloads stall under peak loads. The agent proposes a fix: break large files into smaller, restartable portions — compressed and encrypted — to make high-volume exports reliable. After automated tests, the agent assigns work across developers and coding agents. Two days later, the improved export feature ships. What once took weeks now happens in days — just in time for retailers to close their books.
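The fix the agent proposes, splitting a large export into independently compressed, restartable chunks, can be sketched like this. Plain gzip stands in for whatever compression and encryption the real pipeline would use, and the function names are hypothetical:

```python
import gzip

CHUNK_SIZE = 64 * 1024                       # 64 KB per part; tune per workload

def chunk_export(payload: bytes, chunk_size: int = CHUNK_SIZE):
    """Split the export so each part can be fetched and retried independently."""
    return [gzip.compress(payload[i:i + chunk_size])
            for i in range(0, len(payload), chunk_size)]

def resume_download(chunks, parts_already_received: int) -> bytes:
    """A client whose download stalled re-requests only the missing parts."""
    return b"".join(gzip.decompress(c) for c in chunks[parts_already_received:])

data = b"txn-record;" * 50_000               # a large transaction export
parts = chunk_export(data)
tail = resume_download(parts, parts_already_received=2)   # restart mid-way
assert b"".join(gzip.decompress(p) for p in parts[:2]) + tail == data
```

The design choice that matters is compressing each chunk independently: a stalled transfer costs at most one chunk of rework instead of the whole file.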
Accelerated cycles and “unfinished” products
Development cycles will shrink, products will adapt in real time and engineers will focus more on strategy than execution. Companies that master agentic workflows will deliver faster, personalize products in the flow, and harness speed as a structural advantage. The idea of a “finished product” will give way to continuous iteration and live adaptation.
Compressing timelines
Software development has long been slowed by repetitive testing cycles and deployment bottlenecks. Only about a third of organizations deliver projects on time.85 Agentic AI changes that. By planning, executing and optimizing workflows autonomously, agents reduce errors, accelerate iteration and elevate engineers to orchestrators.86 The impact goes beyond speed. Smaller companies can use agentic AI to overcome resource limitations and compete with larger rivals.87 Agentic AI can also personalize products in real time, drawing on live data and user behavior.88 Imagine robotics systems that self-tune to improve performance or games that adapt dynamically to each player.89
Beyond vibe-coding
The explosion of natural language coding tools (“vibe coding”) is visible,90 but it’s not the core change. The real shift is structural:
· Agents plan work, not just execute tasks
· Teams of agents collaborate, not single copilots
· Agents operate inside real workflows — testing, monitoring, deployment
In effect, organizations are building AI delivery teams that operate continuously across the development lifecycle, with humans setting direction, approving changes and governing risk.
92% of US developers use AI coding tools daily but report spending more time reviewing the code for errors.91
Growth in the share of companies using agents for code, January 2025 to May 2025116
Driving digital DevOps
These advances mark a shift from reactive copilots to agents that plan, act and adapt across the development lifecycle.
Tackling multi-step tasks
Agents break down objectives, plan tasks and execute end-to-end workflows with minimal human input — even scanning entire codebases to find and fix issues proactively.117
Embedding agents in workflows
Agents integrate with tools teams already use — tests, build systems, deployment pipelines — to review code, write tests, run checks and sometimes ship changes with fewer handoffs. Some continuously optimize security testing and release processes.118
Agents delegating to agents
Companies deploy specialized agents — planners, coders, testers, monitors.119 Microsoft promotes multi-agent systems where an orchestrator delegates to specialists the completion of multi-step, multi-system work.120
Big loses its edge
Agentic product development alters competitive dynamics in a subtle but powerful way. Smaller teams can:
· Identify issues faster
· Ship fixes sooner
· Personalize behavior continuously
Large incumbents still have resources — but speed no longer scales linearly with size. This compresses traditional scale advantages and creates new pressure: incumbents must match startup‑level iteration speed without losing control. The best will couple autonomy with governance: clear accountability, access controls and safeguards that support deeper automation.
Trusting the bots
Security and trust
Agents with deep system access expand an organization’s attack surface. Without strict access controls and sandboxing, this can introduce vulnerabilities during updates.
Control and accountability
When agents trigger deployments or change production behavior, accountability blurs. Clear governance is essential to assign responsibility and prevent costly errors.
Capabilities are advancing faster than most organizations’ governance and operating models — creating uneven adoption and new risks.
Reliability in edge cases
Agents perform well in structured environments but can fail in legacy systems or with ambiguous tasks. Human oversight remains critical for non-routine work.
Data and environment quality limitations
Agents rely on accurate data and documentation. Poor inputs can lead to flawed or unsafe actions, making robust data governance a prerequisite for autonomy.93
Organizational readiness
Teams often pilot agentic workflows faster than they adopt the processes, roles and incentives to manage them. Without updated operating models, autonomy can accelerate technical debt. Research shows agent-authored pull requests still need human refinement — reinforcing the need for hybrid oversight rather than blind automation.94
Sources 85–94: www.wellingtone.com, www.gartner.com, www.mckinsey.com, linkedin.com, studio.ey.com, loveable.dev, wikipedia.com, medium.com, www.techradar.com, arxiv.org
AI has long played a supporting role in product development — suggesting code snippets, flagging bugs and speeding routine tasks. Now, generative AI can create code, tests and designs on demand. In the next chapter, AI agents will act, not wait for instructions. Over the next three to five years, they’ll increasingly plan, coordinate and ship under human oversight.
Percentage of all new code is now AI-generated 92
Hybrid computing opens a high-performance path
Quantum computing has long promised breakthroughs — and long disappointed on delivery. The shift now underway is not about replacing traditional systems, but about pairing quantum with them. By combining quantum’s ability to explore vast possibilities with classical computing’s stability and control, progress becomes practical rather than theoretical. This hybrid approach reframes quantum not as a moonshot, but as a tool that can finally start solving real problems. For business leaders, this marks a change: quantum is no longer “someday.” It’s becoming a new kind of co-processor for the tasks that classical machines alone can’t handle.
Case study
Reducing false positives in fraud detection
A research partnership between Mastercard and Oxford Quantum Circuits has found hybrid quantum-classical machine learning can improve the accuracy of fraud detection.95 The hybrid system would mean fewer legitimate transactions being wrongly declined (false positives), addressing a major business cost.
A quantum-classical marriage
On its own, quantum computing still faces major constraints: instability, high error rates, extreme hardware requirements, and limited applicability to everyday workloads. It cannot (yet) power your cloud, run your payments systems, process customer data, or handle cybersecurity at scale.
Unleashing quantum potential
These are domains where small improvements yield enormous economic value — and where classical systems alone hit limits.
Seeding the ecosystem
Tech leaders are prioritizing hybrid architectures
Nvidia is offering NVQLink to connect quantum processors and classical accelerators with high bandwidth and low latency,99 and has launched an Accelerated Quantum Center to explore the interaction between quantum, AI and HPC.100
Google has built hybrid simulators that model quantum behavior on classical hardware, enabling research without the instability of today’s qubits.98
Early-stage quantum-classical services
Major providers now offer experimental early-stage hybrid quantum-classical services as components of their cloud offerings. Among them are:
· Microsoft via Azure Quantum
· Amazon via AWS Braket
· IBM via IBM Quantum
Quantum software ecosystem growth
Open-source frameworks like Qiskit, CUDA-Q and PennyLane are making quantum programming simpler by providing standardized development tools.
Integrated quantum-classical software development kits are now bundled with high-performance environments, making deployment easier.
Building out an infrastructure
New hardware
IBM's new Nighthawk quantum processor can handle calculations 30% more complex than those of earlier models.91
New algorithms
A quantum computer running a Google algorithm mapped a molecule’s structure 13,000 times faster than a classical system, an achievement Google cites as evidence of “quantum advantage,” where quantum systems outperform classical ones in practical tasks.92
New bridging solutions
Hybrid systems demand fast, high-bandwidth data exchange between quantum and classical components. Technologies like Nvidia’s NVQLink are designed to deliver this with microsecond-scale latency.
New stability
French start-up Alice & Bob has maintained the stability of qubits (the basic units of quantum computing) for more than an hour, a big step toward overcoming quantum systems’ fragility. By comparison, most quantum systems sustain “coherence” for only a fraction of a second.93
New practicality
Australian firm Quantum Brilliance has developed a quantum system that operates at room temperature,94 eliminating the need for the near-absolute-zero temperatures of other quantum environments. This innovation could reduce costs and simplify deployment outside of lab conditions.
A high-performance path
Hybrid quantum-classical systems won’t replace existing computing, but they will reshape what’s possible in industries such as finance, logistics, energy, and life sciences. For financial institutions that may include real-time portfolio optimization under volatile market conditions, ultra-fast risk modeling, more precise fraud detection and anomaly scoring, and accelerated stress-testing for regulatory compliance. For other enterprises quantum-classical tools could unlock dynamic supply-chain rerouting, advanced materials design, highly accurate climate simulators, and enable the development of new drugs and biomaterials. Frontier adopters will gain an edge not just through speed, but through better decisions, lower costs, and new classes of products and services that competitors can’t match.
Connections, costs and security concerns
Geopolitical pressure
Export controls and national security concerns are intensifying, raising the risk of fragmented standards, slowed collaboration, and uneven access.101
Integration complexity
True hybrid systems need sub-microsecond communication between classical and quantum components, which remains technologically difficult at scale.
High financial barriers
Cryogenic systems, specialized materials, and error-correction research make infrastructure expensive, concentrating progress among the best-funded organizations.
Sources 95–101: arxiv.org/abs, newsroom.ibm.com, newsroom.ibm.com, www.gizmodo.com, nvidianews.nvidia.com, thequantuminsider.com, quantum.gov
IBM is pioneering “quantum-centric supercomputing,”96 linking quantum systems with classical supercomputers like RIKEN’s Fugaku and developing open ecosystems with AMD.97
But hybrid systems change the equation. By offloading only the hardest optimization, simulation, and molecular-scale problems to quantum, and letting classical systems orchestrate everything else, enterprises can tap quantum benefits years before fully fault-tolerant machines exist. This synergy is what brings quantum into the realm of commercial value: not by replacing classical computing, but by extending it.
Drug discovery in months,not years
A pharmaceutical research consortium uses quantum-classical hybrid computing to pursue a treatment for a rare cancer. Classical systems orchestrate the project’s workflow, performing tasks like breaking down molecular simulations into quantum-suitable problems and correcting errors. Meanwhile, quantum algorithms model the complex molecular interactions between candidate drugs and cancer cell proteins, processing billions of configurations simultaneously — a task that far exceeds what classical computers can do. The result: A promising drug candidate is identified in months rather than years.
Even with progress, hybrid quantum computing faces substantial obstacles:
Designing a trusted future
Taken together, these seven signals point to a structural shift in how advantage is created and sustained.
Intelligence is becoming distributed and embodied; autonomy is expanding, but only where it can be governed and explained; efficiency is becoming a constraint; and data is valuable only when it can be used without being exposed. These are fast becoming the minimum conditions for competitiveness, resilience and trust. As intelligence embeds itself across devices, agents and infrastructure, leadership moves upstream: from deploying tools to deliberately designing the environments in which AI is allowed to decide, act and transact. Organizations that hesitate will not simply move slower; they risk inheriting rules, cost structures and risk profiles set by others.
Three questions now demand near‑term answers from leaders:
· Where will we allow agents to decide and act autonomously, and where will we draw hard, enforceable limits?
· Which constraints will bind first as intelligence scales — energy, cost, regulation or trust — and are we engineering for them now?
· Which parts of our organization can learn and adapt continuously today, and which are structurally falling behind?
These aren’t governance checklists. They are competitive fault lines. For leaders willing to act early, this moment offers a rare opportunity: to shape how intelligence operates, earn trust by design, and turn emerging constraints into durable sources of competitive advantage.
“Intelligence is no longer just a tool we wield, it’s becoming the environment we operate in. As AI spreads across devices, agents and infrastructure, leadership becomes an exercise in shaping trusted ecosystems, not managing workflows. The opportunity here is defining the foundations for the next generation of products and services.”
Ken Moore | Chief innovation officer at Mastercard
Mastercard powers economies and empowers people in 200+ countries and territories worldwide. Together with our customers, we’re building a resilient economy where everyone can prosper. We support a wide range of digital payments choices, making transactions secure, simple, smart and accessible. Our technology and innovation, partnerships and networks combine to deliver a unique set of products and services that help people, businesses and governments realize their greatest potential.
Sources 108–121: www.wellingtone.com, www.gartner.com, www.mckinsey.com, www.linkedin.com, www.studio.ey.com, www.lovable.dev, www.jellyfish.co, www.jellyfish.co, about.gitlab.com, www.testingxperts.com, www.fromdev.com, www.microsoft.com, www.techradar.com, arxiv.org