The AI Value Capture Paradox in 2025

The “AI Value Capture Paradox” refers to the disconnect between the explosive growth of AI capabilities and the actual profits earned from them. In 2025, many companies are racing to build AI features and platforms, but the real value capture often appears concentrated elsewhere – in the infrastructure, data ecosystems, and consulting services that support AI, rather than the AI software products themselves.

In this report, Foray Consulting, LLC analyzes where the AI industry’s revenue and margins are accruing, how investors and enterprises are responding, and what it means for competitive strategy. Spoiler alert: the true winners in 2025 will be those selling “picks and shovels” (infrastructure and expertise) rather than those digging for gold (AI applications). In our conclusion, we will synthesize these insights into several key takeaways for different types of AI stakeholders.

Revenue Concentration & Profit Margins

Where are the big bucks in AI being made? Industry data suggests that the highest margins and revenue streams in AI are concentrated at the infrastructure layer – chips, cloud services, and data centers – rather than the end-user AI applications. For example, NVIDIA, the leading supplier of AI GPUs, saw its data-center revenue and profit surge dramatically in the past year. In a single quarter, NVIDIA doubled its sales (to $13.5 billion) and grew net profit 843% to $6.2 billion, thanks to AI demand. This explosion in profit was possible because NVIDIA raised its prices without a similar rise in costs – its gross margins rocketed from ~43% to over 70% in a year. With an estimated 90%+ market share in data-center AI chips, NVIDIA can “charge what it wants” and is likened to an oil producer during a shortage. In short, the chipmakers and cloud providers enabling AI are “cleaning up” on profits, while many AI software ventures struggle to monetize.

By contrast, companies building AI models and features often face lower margins and high costs. Running advanced AI models requires enormous cloud computing resources, eroding profit margins for AI software providers. For instance, Anthropic, a notable AI startup, had an estimated gross margin of only 50–55%, far below the ~77% average for cloud software companies. Even long-term, insiders expect such AI model providers might only reach ~60% gross margin – still markedly thinner than traditional software businesses. These lower margins reflect the hefty cost of AI compute, data, and R&D. One venture analysis noted that in 2023 the AI industry spent about $50 billion on NVIDIA chips to train models, yet generated only $3 billion in revenue from those models. This 17:1 cost-to-revenue imbalance illustrates how current AI offerings are expensive to build and run relative to the revenue they bring in. In fact, those numbers likely understate total costs since they exclude expenses like supporting hardware, electricity, and the human labor for model training. The economics are so skewed that an analyst quipped “no one else in the genAI business is making any serious money” from the boom besides the infrastructure providers.
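The spend-to-revenue imbalance above is simple arithmetic; as a quick sanity check, the $50 billion and $3 billion figures cited yield roughly the stated ratio (the dollar amounts are the estimates quoted above; everything else is derived):

```python
# Back-of-the-envelope check of the 2023 imbalance cited above:
# ~$50B spent on NVIDIA chips to train models vs. ~$3B in model revenue.
chip_spend = 50e9    # estimated 2023 industry spend on NVIDIA chips (USD)
model_revenue = 3e9  # estimated 2023 revenue from the resulting models (USD)

ratio = chip_spend / model_revenue
print(f"cost-to-revenue ratio ≈ {ratio:.0f}:1")  # ≈ 17:1
```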

In summary, today’s AI value chain is top-heavy: foundational infrastructure vendors (chipmakers, cloud platforms) enjoy fat revenues and margins, while many pure-play AI software companies operate on thin profits or at a loss. This paradox has big implications – it suggests that selling the “picks and shovels” of AI (GPUs, cloud computing, data pipelines) may be far more lucrative than selling AI-driven features or apps. Companies focused on AI features may need to find special niches or complementary revenue streams because the core AI service itself risks becoming a low-margin commodity unless they can break out of this pattern.

Investor Trends (Where the Money Flows)

Investment activity in 2023–2025 reflects this paradox, with significant capital pouring into AI infrastructure and core technology, even as excitement around consumer-facing AI applications runs high. Early in the generative AI wave, venture capital and Big Tech firms did invest heavily in model-centric startups (for example, Microsoft’s multi-billion stake in OpenAI and Google’s backing of Anthropic). However, much of this investment ultimately reinforces the infrastructure ecosystem. Notably, deals like Amazon’s $4 billion investment in Anthropic came with strings attached: Anthropic agreed to make Amazon its primary cloud and use AWS’s specialized AI chips. In effect, such investments channel AI startups’ present and future workloads onto the investor’s infrastructure, ensuring the cloud provider captures value as the startup scales.

Meanwhile, venture and equity funding in “picks and shovels” has surged. In 2023–24, there was a boom in financing for AI infrastructure startups – from cloud compute specialists to data management and MLOps tools. For example, CoreWeave, a cloud provider focused on AI GPU clusters, raised $650 million at a $23 billion valuation in late 2024. Major investors (including hedge funds and even traditional tech companies like Cisco) have piled into such firms, betting that serving the AI rush is a winning play. Similarly, the AI database and data platform segment (vector databases, feature stores, etc.) saw a flurry of startup funding, reflecting demand for optimizing AI data pipelines. In contrast, funding for stand-alone AI application startups has become more selective by 2024, as investors scrutinize how those apps will differentiate themselves or turn a profit beyond the big cloud platforms’ offerings. There’s a growing realization that many AI features might just be subsumed into existing platforms (for instance, productivity suites or CRM software adding generative AI). This has tempered the initial gold rush of “ChatGPT-for-X” startups and refocused some investor attention on enabling technologies that all AI builders will need (efficient model training tools, privacy and compliance solutions, etc.).

Industry analysis indeed points to applications built on commoditized models benefiting end-users, while base model providers struggle with value capture. IoT Analytics, in its 2025 outlook, predicts that “end users and AI application providers” will be the biggest winners, while proprietary model providers could lose out if open-source, low-cost models proliferate. The recent debut of a powerful open model (DeepSeek R1) underscored this dynamic – it rattled stock prices of major AI chip and cloud companies, as the market realized cheaper models could disrupt the incumbents’ high-priced offerings. Investors are now closely watching whether AI value will shift to those who implement and integrate AI (in industry-specific workflows, enterprise software, etc.) as models become more available. In other words, the scarce commodity may no longer be the AI algorithms themselves, but rather the expertise and infrastructure to apply them effectively at scale. This is where smart money is increasingly looking.

In summary, investor trends show heavy bets on infrastructure optimization and AI enablement, aligning with the notion that sustainable value lies beneath the flashy AI apps. The funding landscape is evolving from “fund the next cool demo” toward “fund the picks, shovels, and plumbing that everyone will need.” Companies in the AI space (and their backers) are realizing that competitive moats might come from owning proprietary data, scalable infrastructure, or distribution channels – not just from model novelty.

Cost Structures: AI Infrastructure vs. AI Features

The economics of running AI at scale have laid bare a striking contrast in cost structures. Companies focused on AI feature development (e.g. offering an AI-powered app or service) often incur high variable costs to serve each customer or query, chiefly due to cloud compute usage and data processing. On the other hand, those in the infrastructure or data-management layer operate with high upfront costs but more scalable cost leverage. The result is a paradox: AI application providers find their operational costs scaling nearly linearly with usage, whereas infrastructure providers enjoy economies of scale and can achieve better margins as utilization grows.

For instance, many software firms that added generative AI features saw their cloud bills skyrocket in 2023–2024. A survey of IT and finance leaders found that cloud spending is up ~30% on average largely due to AI workloads, and fully 72% of respondents said generative AI-driven cloud costs are becoming “unmanageable”. This indicates that without careful cost control or monetization, offering AI features can quickly erode profitability – a new AI feature might delight users, but behind the scenes it could be chewing through expensive GPU time. The cost of inference (running AI models) and maintaining AI infrastructure (storing vector indexes, retraining models with new data, etc.) adds ongoing expenses that traditional software didn’t have at this scale. Until those costs come down or are passed on to customers, many AI product teams face gross margin pressure – as we saw, even top AI model providers are operating at ~50% gross margin partly due to hefty cloud and datacenter expenses.

In contrast, infrastructure-layer businesses have cost structures that can lead to healthier margins over time. A cloud provider or a data center operator makes huge capital investments in servers, chips, and networks up front (depreciated over years), but then can share those resources across many clients through multi-tenancy. If managed well, the incremental cost of serving each additional AI customer is far lower than that customer’s proportional share of revenue – meaning higher gross margins. For example, once a cloud platform builds an AI supercluster, adding one more customer’s workload mostly consumes electricity and maintenance, not new capital – allowing the cloud provider to mark up usage and earn a profit. This is why cloud vendors can charge premium rates for AI instances. (Notably, those rates have been so high that many enterprises are questioning the cloud model – more on repatriation below!) Chipmakers similarly have high R&D and fab costs to design advanced AI chips, but once produced, each chip sold carries a strong margin (as evidenced by NVIDIA’s 70%+ gross margin in the AI frenzy).
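The contrast between linear variable costs and amortized fixed costs can be made concrete with a toy margin model. All numbers below are illustrative assumptions (per-query prices, fixed costs), not figures from this report; the point is only the shape of the curves – the application provider’s margin stays flat as usage grows, while the infrastructure provider’s margin improves as fixed costs amortize:

```python
def app_margin(queries, price=0.010, cloud_cost=0.006):
    """AI app provider: pays a cloud fee per query, so costs scale linearly."""
    revenue = queries * price
    return (revenue - queries * cloud_cost) / revenue

def infra_margin(queries, price=0.006, fixed=2_000_000, marginal=0.001):
    """Infra provider: big fixed cost (servers, chips), tiny marginal cost."""
    revenue = queries * price
    return (revenue - (fixed + queries * marginal)) / revenue

# Margin at three monthly volumes: the app provider is stuck at 40%,
# while the infra provider climbs from ~17% to ~77% as utilization grows.
for q in (500e6, 1e9, 5e9):
    print(f"{q:.0e} queries/mo: app {app_margin(q):.0%}, infra {infra_margin(q):.0%}")
```

With these assumed prices the crossover is entirely driven by utilization: the infrastructure provider loses money at low volume, then overtakes the application provider once the $2M fixed cost is spread over enough queries.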

Another dimension is cost predictability and control. AI feature developers often face unpredictable costs because usage can spike (e.g. a viral AI feature leading to a cloud bill shock). In contrast, owning infrastructure or using fixed-capacity systems gives more cost certainty. This is driving some companies to reconsider how they allocate AI workloads. In one study, 95% of IT leaders said they plan to repatriate some of their AI workloads from public cloud to private infrastructure, moving on average ~47% of those workloads off the public cloud. The primary reason cited is cost predictability and savings – by bringing AI in-house, organizations can shift from variable cloud OpEx to fixed-cost investments (servers they own) and optimize those for their specific needs. Essentially, the high cost structure of cloud AI is pushing enterprises to seek cheaper in-house setups, if they have sufficient scale to justify it.

In summary, the cost structure gap is a core piece of the AI value capture puzzle. Providers of AI infrastructure (GPUs, cloud, data platforms) can amortize costs and often charge each client enough to make a healthy profit, whereas providers of AI features/apps often pay those charges, squeezing their own margins. Until technology advancements (e.g. more efficient models, cheaper hardware) or business model innovations (e.g. usage-based pricing to customers, or proprietary chips by AI startups) change this equation, the real value will continue accruing to those who optimize and own the expensive parts of the stack rather than those who simply consume them to deliver an end-user feature. AI companies must plan for these costs in their strategy – whether that means vertical integration (owning more of their compute stack), clever partnerships, or focusing on high-value use cases that can bear a higher cost of delivery.

Talent Acquisition Patterns

Talent trends in 2025 also reflect where companies believe the value lies – and it’s telling that organizations are aggressively hiring for AI infrastructure, data, and strategy roles, not just model researchers or front-end AI developers. Enterprise demand for AI expertise has led to a boom in AI consulting and integration skills. Major consulting firms are significantly expanding their AI workforce: for example, Accenture is doubling its data and AI staff to 80,000 people and investing $3 billion to meet client demand for AI solutions. This massive hiring spree by a consulting giant underscores that companies are willing to pay for expertise to implement AI – indeed, over 50% of large enterprises are already using AI consulting services to guide their projects. The global market for AI consulting is growing ~39% annually and is projected to reach an astounding $630 billion by 2028, indicating that “brains for hire” may capture a significant portion of AI’s economic value. In other words, instead of every enterprise trying to build AI magic in-house, many are channeling funds to specialists (external or internal) who can tailor AI to their business – a service for which consultancies command high billable rates (and generally healthy margins).

Within tech companies themselves, hiring patterns have shifted toward infrastructure and applied AI roles. Cloud providers and platform companies are scooping up talent in AI hardware engineering, distributed computing, and AI ops (MLOps) to bolster their back-end capabilities. There’s intense competition (and high salaries) for GPU engineers, AI compiler experts, and data architects who can optimize large-scale AI workloads. At the same time, enterprise IT departments are hunting for AI platform architects and prompt engineering specialists who can integrate off-the-shelf AI models into business processes. The net effect is that talent specializing in optimization, integration, and cost-effective scaling of AI is highly prized – aligning with the idea that value lies in how AI is delivered, not just the concept of AI itself.

By contrast, hiring for pure AI research (e.g. model scientists) remains active at the biggest AI labs (Big Tech and well-funded startups), but smaller firms are cautious about hiring expensive research talent that doesn’t directly drive near-term product value. In fact, some AI startups have pivoted to focus on engineering efficiencies and go-to-market roles to actually capture value from existing AI tech, rather than invent new algorithms. We also see traditional software engineers upskilling to become “AI engineers,” combining software development with AI API utilization – these roles often emphasize integrating AI into products in a cost-effective way. Another trend is internal “AI Centers of Excellence”: companies appoint lead AI architects or “AI champions” to coordinate efforts, rather than diffuse hiring of dozens of researchers. This indicates companies want practical, ROI-driven AI talent – those who can liaise with vendors, choose the right models or platforms, ensure compliance, and deliver business results.

In summary, talent flows mirror the value capture trend: growth in consultants and integrators, strong demand for infrastructure and MLOps experts, and a more measured approach to hiring pure AI feature builders. Organizations seem to be asking: “Who can help us actually realize value from AI in our context?” – and filling those roles accordingly. For AI-focused companies, this means competition for the right talent is fierce and expensive; many will partner with larger firms or consultancies to fill gaps. For individuals in the field, skills in AI strategy, infrastructure, and applied data science are a ticket to the most in-demand (and well-compensated) jobs, reflecting their central role in capturing AI’s business value.

Cloud Repatriation & Infrastructure Strategies

One striking trend driven by the economics of AI is “cloud repatriation” – enterprises moving AI workloads off public clouds back to on-premise or hybrid infrastructure. After years of migration to cloud, some companies have discovered that renting AI compute at scale can be exorbitantly costly and potentially risky for control of data. Surveys and reports now show a clear shift: a Barclays CIO survey found 83% of enterprises plan to repatriate workloads from public to private cloud, and a recent study showed 95% of IT leaders intend to move at least some resources off public clouds, targeting nearly 47% of their cloud workloads to come back in-house on average. This reversal is largely a reaction to spiraling cloud costs for AI and the desire for greater control.

AI workloads in particular are a catalyst for repatriation. Training and running AI models in the public cloud incurs not only high instance fees but also data egress charges and performance uncertainties. As noted, AI-heavy cloud bills have become unsustainably high for many. By bringing these workloads to private cloud or on-prem data centers, companies seek to achieve a fixed-cost model and optimize hardware usage. They can invest in their own GPU servers (capital expense) and then utilize them at high capacity, rather than pay usage-based rates that include cloud provider markups. Benefits of this move include:

  • Cost predictability and savings: Owning infrastructure allows fixed depreciation costs and avoids surprise bills. Firms report potential savings of 50% or more over equivalent cloud costs by tuning hardware to their specific AI tasks.

  • Performance optimization: Dedicated on-prem AI clusters can be fine-tuned for specific workloads, potentially yielding better performance. Companies can configure hardware and software stack (including emerging specialized AI chips) to their advantage.

  • Data security and compliance: In regulated industries (finance, healthcare, government), keeping sensitive data and AI processing in a private environment aids compliance. Many organizations feel more secure when data doesn’t leave their controlled environment.

  • Hybrid flexibility: Many are not abandoning cloud entirely, but adopting hybrid architectures – e.g. run steady, predictable AI jobs on-prem, and burst to cloud for spiky demand. This optimizes cost while retaining elasticity when needed.

Even vendors are responding to this trend. Traditional enterprise tech firms (like HPE, Dell) are rolling out “AI in a box” on-prem solutions: integrated packages of GPUs, networking, and software that promise cloud-like ease of use but within one’s own data center. HPE’s CEO noted that enterprises are moving from AI experimentation to adoption rapidly, with a focus on private cloud as an essential component for reasons of governance, security and cost. He predicted an on-prem enterprise AI market growing at 90% CAGR, calling out that “very few enterprises will build their own large models; most will fine-tune existing models on their unique data” in a private/hybrid setup. In effect, many companies want to leverage AI, but without handing over all the keys (and checks) to the public cloud providers.

It’s important to note that hyperscalers themselves are facilitating hybrid and on-prem options – for instance, cloud vendors offer “bring your own GPU” managed services or sell stripped-down versions of their AI cloud frameworks for on-prem use, knowing that not every workload will live in their data centers. They’d rather keep customers in the fold via hybrid offerings than lose them entirely. This dynamic underscores a strategic point in the value capture equation: Cloud providers dominated early AI value capture, but customers are pushing back to retain more value by owning infrastructure. As a result, we may see a more balanced landscape where some AI value stays with enterprise IT departments (who operate cost-efficient private clouds), while hyperscalers focus on truly elastic or managed services that justify their premium.

For AI-focused companies and investors, cloud repatriation trends signal that business models purely reliant on expensive cloud compute face margin pressure – and a margin improvement opportunity, if they too can optimize or relocate workloads. It also suggests new opportunities in enabling on-prem AI – from software that helps manage AI clusters, to companies offering “AI colocation” services (data center providers catering specifically to AI hardware needs). Overall, the rise of hybrid and on-prem strategies is about recapturing value: enterprises are essentially saying, “If the cloud is eating our AI budget, we need to own part of that infrastructure and run the computing ourselves to save costs.”

Market Share & Competitive Dynamics

The competitive landscape in AI as of 2025 is defined by a mix of entrenched tech giants fortifying their dominance and strategic plays by newer entrants to carve out niches. The paradox of value capture is evident here: many of the firms reaping outsized rewards are those that were already dominant in adjacent fields (chips, cloud, enterprise software), whereas newer AI-specific players often find themselves dependent on those incumbents for critical resources or distribution.

  • Cloud & AI Platforms: The “Big Three” cloud providers – Amazon Web Services, Microsoft Azure, and Google Cloud – have preserved or even expanded their market shares by becoming the go-to platforms for AI services. They leveraged their war chests to invest in AI (e.g. Microsoft’s partnership with OpenAI, Amazon’s deals with Anthropic) and integrate advanced AI APIs into their offerings. This has a reinforcing effect: enterprises stick with known cloud vendors for AI, and those vendors then capture the incremental revenue. These firms also rapidly rolled out proprietary AI models and services (like Google’s Vertex AI, AWS Bedrock, Azure OpenAI Service) to ensure that as AI adoption grows, customers remain in their ecosystems. Market share data shows cloud leaders still command the majority of AI workloads, and with each adding AI-specialized infrastructure (sometimes even their own silicon, like Google TPU or Amazon Trainium), they aim to hold onto that lead. In short, the cloud giants are using AI as both an offensive and defensive play – offensive in creating new AI offerings to sell, defensive in bundling AI features to prevent client attrition to upstart platforms.

  • Semiconductors: As discussed, NVIDIA dominates the AI chipset market with an estimated 80–90% share of AI accelerator hardware. This dominance means NVIDIA largely dictates pricing and supply, which has been a boon to its profits but a concern for every company that relies on GPUs. Competitors like AMD, as well as AI chip startups and initiatives (Graphcore, Google’s TPU, etc.), are attempting to chip away at NVIDIA’s lead. By 2025, there is rising competition in the AI chip space – for example, AMD’s latest MI300 GPUs and new AI-focused chips are coming online, and Intel is also investing in neural processors. However, none have yet matched the ecosystem and software support (the CUDA platform) that NVIDIA offers. Thus, NVIDIA’s market share remains strong, and it continues to be a critical supplier (some would say bottleneck) for the entire industry. For AI companies, this means their fate is partly tied to NVIDIA’s production and pricing unless they pursue alternatives. It’s telling that even leaders of the largest tech firms (like Oracle’s Larry Ellison) joke about “begging Jensen” (NVIDIA CEO Jensen Huang) for GPUs because demand is so high. This dynamic underscores how value (and power) concentrates at the infrastructure source: whoever controls the key components (GPUs, in this case) holds tremendous sway over the AI market’s economics.

  • Enterprise Software and Incumbents: Many incumbent software firms have quickly integrated generative AI into their products, ensuring they retain customers and add value without ceding them to new AI-native challengers. For example, Microsoft infused AI copilots across Office and its developer tools, Adobe launched Firefly generative AI in its Creative Cloud, and Salesforce introduced Einstein GPT for CRM – all moves that package AI features into existing dominant platforms. This has helped these incumbents preserve market share by making their platforms “smarter” rather than letting third-party AI tools intercept their users. So far, customers seem inclined to use AI capabilities from the vendors they already trust (and have data with), which means net new AI startups in those domains must overcome both technical and distribution moats. The pattern emerging is that AI becomes a layer in the stack, not a separate product – often an expected feature. This can be bad news for standalone AI software companies (their feature might just become a checkbox offering in a larger suite), but it benefits the incumbents who can charge more or improve retention with these features. Indeed, some software firms have started to monetize AI as an add-on – e.g. offering advanced AI features in a higher pricing tier – which could improve their margins if customers accept the upsell.

  • Consulting and Services Firms: The big professional service firms (Accenture, Deloitte, PwC, etc.) and IT integrators (IBM Consulting, Infosys, etc.) are solidifying their role as key enablers in the AI ecosystem. They may not create AI models, but they deeply embed themselves with client organizations to implement AI solutions. By doing so, they capture a portion of the value that might otherwise be attributed to the software itself. For instance, a bank might license an AI platform, but the consultants who tailor that platform to the bank’s processes and ensure it delivers ROI might capture fees comparable to or greater than the software licensing cost. This dynamic means services firms maintain a strong market position – their share of the “AI pie” grows as overall enterprise AI spending grows. They are somewhat sector-agnostic and follow the money: wherever AI is being adopted (finance, healthcare, retail), consulting firms are there to guide and profit. As noted, the AI consulting market is enormous and growing; these firms are likely to remain influential in steering AI adoption and thereby capturing value via expertise and execution.

Overall, market share trends in AI show the rich getting richer in many cases – the already-dominant players in tech have leveraged AI to entrench themselves further. However, there are pressure points and shifts to watch. The rise of open-source AI models (like Meta’s LLaMA and the mentioned DeepSeek R1) could commoditize some layers and empower new players who build on these models without needing a giant research budget. If open models remain competitive, the “proprietary model” providers (including some well-funded startups) could see their advantage wane, as one analysis warns. That would flip the script, making data quality, domain expertise, and integration the key differentiators (again favoring those who have distribution or vertical knowledge). Additionally, hardware diversification (new chips) could slowly erode NVIDIA’s chokehold, which might lower infrastructure costs and shift some value back to AI service providers (if GPU prices drop, AI software margins can improve).

For now, though, the firms with deep pockets, infrastructure control, or entrenched customer bases are in the strongest position. Startups and smaller players must align with or find gaps around these giants – for example, targeting underserved industries, offering specialized data or privacy that big platforms don’t provide, or innovating in model efficiency. The paradox remains: many newcomers drive innovation, but the incumbents capture much of the commercial value.

Conclusion: Strategic Implications for AI Stakeholders

The AI Value Capture Paradox in 2025 teaches a clear lesson: innovating in AI is necessary, but not sufficient, for business success. Real value (revenue, profit, strategic control) is gravitating toward those who provide the indispensable foundations – whether that’s the infrastructure (computing power, cloud services, data pipelines) or the expert guidance to implement AI effectively. Meanwhile, simply building an AI-powered feature or product, no matter how novel, does not guarantee a big share of industry profits.

For AI-focused companies (startups and tech firms), this means strategic clarity is crucial. Companies should identify where they sit in the AI value chain and how they can capture a defensible share of value. If you’re offering an AI application, consider bundling in proprietary data, industry-specific expertise, or workflow integration that customers will pay premium for – otherwise you risk being undercut by open-source models or being subsumed as a feature in a larger platform. Pay attention to your cost structure: optimize your cloud usage, explore hybrid infrastructure (as even your clients might prefer on-prem solutions), and perhaps negotiate partnerships with infrastructure providers so you aren’t drowned by operational costs. The current landscape suggests that partnering “up the stack” (with cloud or chip providers) and “down the stack” (with consultants or enterprise integrators) can help ensure you actually capture some revenue from the end-to-end value delivered. In short, differentiation and cost discipline are key – the era of just having a cool model and burning cash for growth is ending, as ROI becomes king.

For investors, the findings underscore the wisdom of a balanced approach. The hype around generative AI led to rich valuations for pure-play AI software companies, but the long-term winners may be those with moats in infrastructure efficiency or distribution. Investors would do well to probe a company’s unit economics and value chain position. Is this startup going to pay AWS or Azure $0.50 of every revenue dollar for compute? If so, the margin will always be slim. Is a company building tools that every AI adopter will need (security, data management, cost optimization)? Those could capture broad value. The trend toward open models and commoditization of algorithms means investors might also favor businesses that own unique data assets or customer relationships – those are harder for a giant to copy and can justify value capture. We’ve also seen that hardware and infrastructure bets (while capital intensive) have paid off hugely in this cycle (witness NVIDIA’s valuation and the success of cloud providers). Thus, an investor might tilt toward the “picks and shovels” plays, or at least ensure any AI application investment has a clear path to monetization (perhaps via enterprise SaaS model, or by being acquired by a larger platform looking to add that feature).

For enterprises and AI adopters, the paradox serves as a caution: adopting AI is not just a tech project, but a strategic economic decision. Business leaders should be mindful of who captures the value when they implement AI. If you simply buy a pre-trained model service from a cloud provider, most of the value might actually accrue to that provider (especially if the deployment drives up your cloud bill without a proportional revenue increase). Enterprises should strategize to capture value for themselves – for example, by investing in internal capability (upskill teams, build a private AI stack for core IP), negotiating contracts that align costs with outcomes, and focusing AI efforts where they genuinely improve revenue or efficiency. The rise of AI consulting indicates many companies realize they need help; just ensure that the knowledge transfer happens so that your organization retains long-term benefits, not only the consultants. In essence, companies adopting AI should aim to become owners – of data, of tailored models, of expertise – not just consumers. This will help them preserve margins and competitive advantage as AI becomes ubiquitous.

In conclusion, the AI industry of 2025 reveals that value capture often favors the “infrastructure and integration” side over “features and models.” High margins reside with the cloud titans, chipmakers, and perhaps those adept at weaving AI into business fabric, rather than with every AI app developer. But this landscape is still evolving. As AI technology matures and diffuses, we may see the paradox shift – if infrastructure becomes commodity (cheaper and widely available), then differentiation might once again move up the stack. For now, however, any AI business model or corporate strategy must account for the current reality: to win in AI, it’s not just about building the smartest model, but about positioning oneself at the right layer of the value chain. Companies and investors that internalize this – optimizing for revenue concentration, aligning with where margins are high, and ensuring they aren’t simply enriching their suppliers – will be best positioned to thrive in the next chapter of the AI revolution. The “paradox” can be overcome by those who approach AI with a holistic strategy, capturing value for themselves and delivering true ROI for their customers, rather than just chasing the AI hype.
