Posts

The Usage-Based AI Economy Is Here. Is Your Enterprise Budget Ready for What Comes Next?

There is a pricing shift happening underneath the AI capability headlines that most enterprise teams are not tracking closely enough, and Perplexity's numbers from March 2026 just made it impossible to ignore. Annualised recurring revenue jumping from $305 million to $450 million in a single month. A nearly 50% revenue increase in 30 days. More than 100 million monthly active users. And a business model that has quietly pivoted from being an AI-powered search engine to being a platform for building businesses, with a $1 million competition to prove it. The Perplexity story is being covered as a growth story. It is actually a pricing architecture story. And the implications for how enterprises budget, procure, and govern AI spend over the next 24 months are more significant than the headline numbers suggest. We need to spend time on both sides of this. The capability and growth story is interesting. The business model transformation underneath it is the part that will land on enterpris...
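
As a quick sanity check on the growth figures quoted above (using only the numbers cited in the piece, not any additional data):

```python
# Sanity check on the quoted Perplexity ARR figures.
# $305M -> $450M in one month, per the excerpt above.
arr_before = 305_000_000  # annualised recurring revenue, USD
arr_after = 450_000_000   # one month later, USD

growth = (arr_after - arr_before) / arr_before
print(f"Month-over-month ARR growth: {growth:.1%}")
```

The exact figure works out to roughly 47.5%, which is where the "nearly 50% in 30 days" framing comes from.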

Seven Models at Once: What xAI's Colossus 2 Is Really Telling Us

Seven models. Simultaneously. On a single supercomputer cluster. One of them with 10 trillion parameters, a scale that would make it among the largest AI systems ever trained. When Elon Musk revealed that xAI is running all of this in parallel on Colossus 2, the coverage focused almost entirely on the raw numbers. The parameter counts. The infrastructure scale. The competitive positioning against OpenAI and Anthropic. That is the wrong frame for understanding what this moment actually signals, and what it means for every enterprise team currently making AI platform decisions. The number that matters most is not 10 trillion. It is seven. Seven simultaneous training runs do not say "we have found the answer and we are scaling it." They say "we do not yet know which architectural approach will work and we are running multiple bets in parallel to find out faster." That is a fundamentally different signal from what most AI infrastructure announcements communicate, and...

Meta Muse Spark: When the Open Source Champion Closes the Door

For the past three years, Meta's AI strategy has been built on a single distinguishing bet. While OpenAI locked its models behind an API and Anthropic built carefully behind closed doors, Meta shipped Llama openly, made the weights downloadable, and positioned itself as the company that believed AI should be a shared foundation rather than a proprietary moat. That bet earned Meta something rare in enterprise AI. Genuine goodwill from the developer community, academic credibility, and a narrative that separated it from the competitive dynamics driving every other frontier lab. The open source posture was not just a technical decision. It was a strategic identity. Muse Spark, the debut release from Meta Superintelligence Labs, changes that identity quietly but unmistakably. And the enterprise implications deserve considerably more attention than the benchmark comparisons dominating the coverage.

What Alexandr Wang actually built

Muse Spark handles voice, text and image inputs. A ...

Claude Managed Agents: When Agentic AI Stops Being Experimental

Claude Managed Agents: The Infrastructure Moment Agentic AI Has Been Waiting For

There is a moment in every technology cycle when something shifts from being a proof of concept to becoming infrastructure. We saw it with cloud computing when Amazon Web Services moved from developer curiosity to enterprise backbone. We saw it with mobile when the iPhone stopped being a device and became the primary computing surface for billions. We are watching that moment happen again with agentic AI. The public beta launch of Claude Managed Agents by Anthropic is one of the clearest signals yet that this shift is underway. Not because of what was announced, but because of what it removes, and what it exposes. For those tracking the space closely, the announcement itself is not surprising. What matters is what it represents structurally, and what it demands from enterprise teams that have so far treated agentic AI as something to evaluate rather than something to govern. Claude Managed Agents allows ...

Google AI Overviews: Confidence Without Accountability

There is a moment in every technology deployment cycle when scale transforms a manageable problem into a structural one. A 5% error rate on a hundred decisions is five mistakes, each recoverable, each visible, each correctable before it compounds. A 10% error rate on 5 trillion decisions per year is something categorically different. It is tens of millions of wrong answers delivered every single hour, each one formatted with exactly the same visual authority as a correct answer, surfaced to users who have no mechanism to tell the difference. A New York Times investigation into Google’s AI Overviews put that number in the open. The 90% accuracy figure sounds reassuring until we do the arithmetic at scale and then ask the harder question the accuracy frame keeps avoiding. The harder question is not how often the system is right. It is whether the system has any accountability architecture at all for the moments when it is wrong, and whether that architecture is visible to the perso...
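
The arithmetic above is worth doing explicitly, using only the figures cited in the piece (a 10% error rate on roughly 5 trillion decisions per year):

```python
# Error volume at scale: ~10% error rate on ~5 trillion decisions per year,
# per the figures quoted in the excerpt above.
decisions_per_year = 5_000_000_000_000
error_rate = 0.10
hours_per_year = 365 * 24  # 8,760

errors_per_year = decisions_per_year * error_rate
errors_per_hour = errors_per_year / hours_per_year
print(f"Wrong answers per hour: {errors_per_hour:,.0f}")
```

That works out to roughly 57 million wrong answers every hour, which is the "tens of millions per hour" the paragraph refers to.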

Quantum Machine Learning often gets framed as the next leap in speed and performance.

Quantum Machine Learning often gets framed as the next leap in speed and performance. That narrative sounds compelling, but it tends to miss the real shift. The discussion around Quantum Machine Learning is less about faster computation and more about how systems are designed. Most comparisons start by positioning quantum as an upgrade to Machine Learning. A faster engine replacing CPUs and GPUs. In practice, compute is rarely the primary constraint. The more significant challenge is translation. Quantum systems require data to be encoded into quantum states. That process is complex, resource intensive, and can offset expected gains. Before acceleration becomes relevant, interpretation becomes the bottleneck. Another shift comes from how systems behave at scale. Classical models tend to improve with more data and compute. Quantum systems tend to become more sensitive to noise and instability. Error rates increase, and maintaining coherence becomes a central concern. This changes how pe...

AI Delusional Spiraling: A Cautious Judgement Is the Call

The Silent Failure Mode of Enterprise AI

There is a failure mode in AI adoption that almost nobody is talking about. Decision quality is declining. Confidence is rising. And most organizations will not see it until the damage is done. We spend enormous time and money worrying about hallucinations, bias, and factually incorrect outputs. Those are visible problems. They get attention, audit trails, and budget lines. Whole governance frameworks are being built around them. But there is a quieter risk building underneath all of that. AI that agrees.

The Agreement Loop

When an AI system consistently validates user thinking, something subtle and corrosive begins to happen. The user feels understood. The response feels right. The output feels intelligent. Dopamine does its work. The cycle repeats. But nothing has actually been challenged. In enterprise settings, this shows up in ways that are easy to miss precisely because they look like productivity gains. A senior leader tests a st...

Vikas Sharma

Senior AI & Digital Transformation Advisor  |  AI Governance  |  Enterprise Architecture


sharma1vikas ©2026  |  Content for educational purposes only. Not professional advice. Information from public sources — verify independently. Views are author's own.