From Slop to Substance


Why the First Wave of AI Products Disappoints — and What We Must Learn

By amedios editorial team in collaboration with our AI Partner

1. The Broken Promise of AI

 

It was never supposed to be like this.


Artificial intelligence was meant to solve humanity’s most pressing problems: cure diseases, automate drudgery, reinvent education, and unlock a new age of human creativity. Instead, the first generation of mainstream AI products has delivered… a TikTok clone, endless meme generators, and an internet flooded with deepfakes and cheap content.

 

This is not a minor disappointment. It is a strategic warning sign. The gap between AI’s transformative potential and its current reality is widening. And unless we learn from the failures of this “first wave,” we risk wasting trillions of dollars, eroding public trust, and slowing one of the most powerful technologies of our time.


2. Slopware: The Consumer AI That Went Nowhere

 

The most visible AI products today are the least meaningful.

 

OpenAI’s Sora promised a leap forward in generative video. What it delivered was essentially a TikTok knock-off, powered by prompts instead of cameras. The novelty wears off in minutes, leaving behind an expensive toy - one that casually tramples the personality rights of living and deceased people alike.

 

Runway and Pika Labs, both hailed as pioneers of text-to-video, offer technically impressive outputs, but struggle to find business-critical use cases beyond marketing stunts and social content.

 

Midjourney, a darling of the early generative art boom, produces stunning visuals. Yet for most enterprises, those images translate into neither revenue nor efficiency.

 

D-ID enables anyone to generate talking-head avatars. But in practice, it’s used more for novelty YouTube videos than enterprise transformation.

 

These tools are not “bad.” They are evidence of how far we still are from AI systems that change how companies operate or how societies function. As amedios sees it, they represent a “phase one” problem: building for engagement, not for impact. They capture attention — but fail to capture value.


3. The Infrastructure Illusion: Scale Without Substance

 

The problem is not limited to consumer apps. Even the foundation models — the bedrock of the AI ecosystem — have underwhelmed.
 

GPT-5, after years of hype as a step toward AGI, arrived as little more than an incremental upgrade over GPT-4. It is smarter at niche tasks but far from the paradigm shift many expected. It has become a valuable everyday companion for millions of people - but it is neither reliable nor consistent in its results.

 

Anthropic’s Claude 3 excels at long-context reasoning and safer outputs, yet it remains primarily a productivity aid rather than a disruptive force.

 

Google Gemini promised multimodal intelligence that would redefine human-machine interaction. Instead, it has struggled to differentiate itself from OpenAI and has faced reliability issues.

 

Meta’s LLaMA 3 is powerful and open-source — but its real-world applications are still mostly experimental.

 

Generative AI tools like ChatGPT or Claude remain companions you can never trust entirely. At best, they behave like team members who must be micro-managed. They can greatly enhance human-AI collaboration - but it would be naive, even dangerous, to let them replace human work unmonitored.

 

The underlying assumption - that more compute plus more data equals more intelligence - is showing its limits. Model scaling is hitting diminishing returns. To put it simply: a bigger engine doesn’t matter if the car has nowhere to go.


4. The Deepfake Dilemma: When Reality Becomes Optional

 

Perhaps the most worrying side-effect of the first AI wave is the collapse of trust in what we see and hear online. Deepfakes have already entered mainstream politics and culture - yet the public outcry has failed to materialize:

 

  • In 2022, a fabricated video circulated on social media that appeared to show Ukrainian President Volodymyr Zelensky urging his troops to surrender - briefly sowing confusion.
     
  • An image of Pope Francis wearing a Balenciaga puffer jacket fooled millions, demonstrating how easily synthetic visuals go viral.
     
  • Generative tools now create entire news clips that look authentic enough to sway opinions or amplify disinformation campaigns.

 

Tools like Sora or D-ID make it trivial to replicate someone’s face, voice, or mannerisms - often without consent. And while companies attempt to add watermarks or labels, these are easily removed or ignored.

 

This is not just a PR issue. It’s a strategic risk for businesses (brand manipulation, fraud), governments (election interference), and society (erosion of shared reality). amedios’ position is clear: provenance, consent, and traceability must become standard features - not optional afterthoughts.


5. The Productivity Mirage: 95% of LLM Projects Fail

 

While consumer apps flood the web with low-value content, enterprises have faced a different problem: turning AI into measurable productivity gains.

 

A 2024 MIT study found that 95% of corporate LLM deployments failed to deliver a profit. A University of Chicago survey of 7,000 Danish companies showed minimal productivity gains from chatbots. Even Microsoft Copilot, despite deep Office integration, has yet to deliver the “10x productivity” many executives were promised. Early adopters report time savings but often no clear ROI.

 

The reason is simple: AI has too often been treated as a technology project rather than a business solution. Deployments lack clear success metrics, data readiness, and workflow integration.

 

We think companies should invert their approach: start with a specific, measurable business problem - then choose the AI that solves it. Tools deployed without context are destined to disappoint.


6. The Bubble Dynamics: Billions Spent, Billions at Risk

 

The mismatch between promise and reality is not just technological - it’s economic.

 

Tech giants are spending staggering sums on infrastructure. Microsoft, Amazon, and Google are pouring hundreds of billions into data centers and GPUs. NVIDIA’s revenue has exploded. In 2024 alone, AI infrastructure investment surpassed the total cost of building the U.S. interstate highway system.

 

Yet the revenue to justify this spending is not materializing. Bain & Company estimates the AI industry will fall $800 billion short of revenue targets by 2030. GPT-5 might wow investors, but OpenAI’s revenue is estimated at around $13 billion - tiny compared to what is required to sustain this growth.

 

This creates systemic risk. Stock valuations, pension funds, and GDP growth are now tied to AI’s success. If revenue fails to follow, the “AI boom” could become a classic bubble whose burst would hit not only tech giants but also ordinary investors and the broader economy.


7. Five Strategic Lessons for the Next Phase

 

The failures of the first AI wave don’t mean AI is overrated. They mean we must get smarter about how we build, deploy, and govern it. amedios distills five lessons for the decade ahead:

 

1. Purpose Before Product: Start with a real problem worth solving. Technology should serve a mission, not the other way around.

 

2. Value Before Virality: Engagement is meaningless without economic impact. Products must contribute to revenue, efficiency, or strategic differentiation.

 

3. Responsibility by Design: Consent, provenance, moderation, and ethical guardrails should be embedded into architecture from day one.

 

4. Focus Before Scale: Specialized, domain-specific AI solutions often deliver more value than monolithic, generalized models.

5. Governance Before Growth: Transparency, auditability, and accountability are not “compliance costs.” They are prerequisites for long-term adoption and trust.


8. Conclusion: Maturity or Meltdown

 

The first wave of AI is best understood as a mirror: it reflects both the technology’s promise and our collective immaturity in harnessing it. We built tools to entertain rather than transform. We chased engagement over value. We scaled models without purpose. And in doing so, we squandered trust, capital, and momentum.

 

But this is not the end of the AI story. It is merely the beginning of its next chapter.

 

The companies that will define that chapter are not the ones flooding app stores with gimmicks. They are the ones building trustworthy systems, solving real problems, and delivering tangible business outcomes. They are the ones that embrace responsibility as a competitive advantage and governance as a growth enabler.

 

We believe that the future of AI belongs to those who shift from slop to substance - who treat AI not as a shiny distraction but as a strategic lever for human progress. That shift begins now.

 

The AI era is not failing, but its first wave has fallen short. Now is the time for suppliers and customers of AI-related products to demand more: more purpose, more value, more accountability. Only then will AI move from novelty to necessity.
