
Blog - AI in Society
“Useless Eaters”? Why a Dangerous Term Has Returned in the U.S. AI Debate
Are humans becoming “useless” in the age of AI?
From star author Yuval Noah Harari’s warnings to Elon Musk’s existential questions, we explore why this debate matters - and why the narrative we choose now will shape the future of work and meaning.
A term that provokes unease in AI discussion
When Israeli historian Yuval Noah Harari warned in The Guardian back in 2016 about the rise of a “useless class,” many readers dismissed it as speculative. His warning went largely unheard: “As artificial intelligence gets smarter, more humans are pushed out of the job market. … Billions of people are useless, not through chance but by definition.”
In his bestseller Homo Deus (2017) he sharpened the point: “This ‘useless class’ will not be merely unemployed – it will be unemployable.” A year later, in 21 Lessons for the 21st Century (2018), he added nuance: people would not be useless to their families or friends, but “useless from the viewpoint of the economic and political system.”
Technology leaders soon joined in. Elon Musk warned in 2021 that some careers could soon become “useless” under AI. By 2024, on stage at VivaTech in Paris, he put the question more starkly: “If you’re not needed, if there’s not a need for your labor, how do you find meaning? Do you feel useless?”
The discussion is nearly a decade old. But what once seemed like a distant and far-fetched scenario has become part of the mainstream AI debate in the U.S., as user adoption of generative AI software rises sharply and its impact on the U.S. labor market is already visible. Even companies like Salesforce use the AI argument to justify major downsizing waves, for example in customer service functions.
So it is legitimate to ask whether the principle of “Human-AI Collaboration” is still the predominant goal for both our economy and society. What will human labor be worth in the future when machines take over? This leads us to a very significant and dangerous part of the current AI discussion in the U.S. – the resurrection of the diabolical expression “useless eater.”
The historical burden of “useless eaters”
The re-emergence of the phrase “useless eaters” makes the debate around AI really unsettling. The term was coined in Nazi Germany to label people with disabilities, the elderly, and the chronically ill as societal “burdens.” It served as propaganda to justify their systematic persecution and extermination.
Today, the phrase appears in U.S. AI debates almost exclusively in quotation marks and with explicit criticism. But its very reappearance underscores how strongly the fear of redundancy resonates - and how quickly language can slip into dehumanization.
Why the U.S. is a hotspot for the debate
The United States has become the epicenter of this conversation. With limited welfare and healthcare systems, job loss is not just about income. It can mean the complete loss of status, security, home, and meaning.
This context fuels intense discussions around universal basic income (UBI). The idea: when you are out of work and business has no place for you, the government should take care of you and provide the basic necessities of life, giving you a fair share of the productivity gains that companies achieve with the help of AI and automation. Whether this principle is realistic and economically sustainable, especially in a country like the U.S., remains an open question.
But the framing itself narrows the entire discussion around AI to the level of damage control: how to compensate the “losers,” rather than how to create new roles through Human-AI collaboration. Once economic usefulness becomes the measure of human value, the path to stigmatization is alarmingly short.
The risk of a narrowing discourse
If the public conversation continues to focus only on the jobs that AI destroys, we risk locking ourselves into a zero-sum worldview: the “useful” versus the “useless.” In such a narrative, technological progress becomes synonymous with human decline, and every breakthrough in automation is framed as a social loss.
Over the next three to five years, this framing could have real consequences. Political debates may harden around redistribution alone - who deserves compensation and who does not - instead of broader visions of empowerment. Companies could find themselves under pressure to justify every automation step with defensive rhetoric about cost savings, rather than articulating how AI opens new horizons of value. Public opinion could shift toward fear and resentment, undermining trust in innovation and accelerating calls for restrictive regulation.
The danger is not simply economic. If society embraces the language of obsolescence, it risks reducing human worth to economic utility. That shift would echo into education, where curricula may be reshaped to train only for “survivable” jobs, and into politics, where entire demographics could be cast as liabilities.
In short: a discourse of harm and replacement could create a self-fulfilling prophecy, leaving millions feeling excluded from the AI-driven future.
A different path: Human–AI Collaboration
Yet another path is open to us — one where AI is seen not as a competitor, but as a collaborator. In this vision, AI doesn’t erase human potential; it expands it. The real story of the coming years could be about augmentation: machines handling what is repetitive, humans focusing on what is relational, creative, and strategic.
The next three to five years will be critical in showing how this works in practice:
- In healthcare, AI will increasingly read scans, predict risks, and manage data, while doctors spend more time on empathy, complex decision-making, and human connection.
- In education, AI tutors will personalize learning at scale, but teachers will remain vital as mentors, motivators, and guides to meaning.
- In business, AI will automate routine analysis, but human teams will excel in negotiation, judgment, and vision — the elements of leadership machines cannot replicate.
- In creativity, AI will suggest drafts, melodies, or designs, but humans will shape narratives, cultural resonance, and emotional truth.
The implications are profound. Rather than a society split into “useful” and “useless,” we can aim for one where every individual is empowered to do more of what only humans can do best. That requires investment in skills, inclusive policies, and a deliberate rejection of dehumanizing language.
The choice is ours: replace or empower. And the decisions made in the next few years — in boardrooms, classrooms, and parliaments — will determine which narrative becomes reality.
Conclusion: Choosing the narrative of the next five years
The return of a term as dark as “useless eaters” highlights the stakes of today’s AI debate. What began nearly a decade ago as Harari’s thought experiment and Musk’s provocation has now matured into a defining public question: Will AI diminish human value, or expand it?
The next three to five years will be decisive. If discourse narrows to loss and replacement, politics may default to redistribution, companies to defensive cost-cutting, and individuals to fear. But if we deliberately shift the narrative toward Human–AI Collaboration, we can create a different trajectory: one where technology handles the mechanical so that people can focus on the meaningful.
This is not a minor choice of words. The language we use today will shape the policies, investments, and mindsets of tomorrow. To call humans “useless” is to risk making them so; to see humans as collaborators is to unlock their potential.
At amedios, we believe the future must not be defined by sorting people into “useful” and “useless.” It must be defined by the co-creation of humans and machines. A future where no one is left behind, and where AI is a force for empowerment rather than exclusion.
