Why Tech Needs the Humanities in the AI Era

Machinations: AI through a cultural lens, Vol. 1

Dario Llinares
Dario Llinares
May 08, 2025

This is the first entry in a new series: Machinations - a new section on my Substack where I reflect on the narratives around so-called AI as a cultural phenomenon. If you’re interested in how cultural theory meets emerging tech, subscribe and join the dialogue.


We often talk about artificial intelligence as if it were a force of nature—inevitable, impersonal, beyond our control. A wave of progress crashing over society, sweeping away old ways of thinking. Klaus Schwab calls it the Fourth Industrial Revolution. The message is clear: get on board or get left behind.

But framing AI as destiny rather than design is more than misleading—it’s dangerous. It shrinks our sense of agency. It implies that technology evolves on a linear track, detached from human values, choices, or consequences. In this worldview, to hesitate, to question, is to be a luddite. A modern-day heretic with their fingers in their ears shouting “la la la” - while the smart people get on with building the future.

And who exactly is building that future? A narrow class of techno-optimists - mostly white, mostly male, mostly billionaires - who speak with unshakable confidence about where we’re going. Gates, Musk, Bezos, Altman: the usual suspects. Evangelisers with a seemingly limitless scope for presenting the upside of AI development, with very little room in their discourse for the impacts and consequences that will result from this shift.

Why would they not be excited about this future? It’s in their economic interests to fast-track these ideas into our lives, and they possess the economic means to personally mitigate any impacts.

These techno-futurist visions of AI echo an old pattern, what we might call the default setting of modernity: technological determinism. Every major innovation, from the printing press to the smartphone, arrives first as a disruption, then as a necessity, and finally as something we can’t imagine living without. We didn’t know we needed it, until suddenly we couldn’t do without it.

But this mindset carries real risks. It encourages us to focus on what can be built, rather than why it’s being built, how it will shape our lives, and who ultimately benefits. It masks the reality that every technology - no matter how advanced - is the product of human choices. Choices shaped by messy, contingent forces: psychology, ideology, history, economics.

Elon Musk’s tweet has the air of inevitability… but is this a good or a bad thing?

And when those technologies enter the world, they’re rarely used in the ways their creators predicted. They’re absorbed into the texture of everyday life - twisted, co-opted, repurposed. Not by design, but by the wonderful chaos of culture.

Like many of you, I feel daunted—not just by the uncertainty of the future, but by the intensity of the present. Generative AI is not a distant force; it’s already here, already reshaping the cultural and professional spaces I’ve spent decades working in. My thoughts spiral in a thousand directions: What’s being lost? What’s being gained? And how should we respond, not just as individuals, but as a society?

For those of us rooted in the humanities, this moment feels both urgent and strangely familiar. These disciplines, so often dismissed as outdated or ornamental, have always asked the difficult questions. They’ve always grappled with ambiguity, contradiction, and the ethics of progress. Now, as AI accelerates through culture and industry, the critical insights the humanities offer are more vital than ever.

Not because we’re the last romantics, clinging to some fading vision of humanism (though there’s a place for that, too). But because we understand that technological futures aren’t built in labs alone. They are cultural projects. And culture is where the humanities do their deepest work.

Philosophy, history, literature, the arts: these aren’t adjuncts to innovation. They’re the conditions that make meaningful innovation possible. They help us interrogate assumptions, anticipate consequences, and foreground the human experiences that AI too often flattens or forgets.

Challenging AI as Destiny

This is why we need to challenge the stories we tell about technology, because stories shape perception, and perception shapes power. One of the most persistent myths in our digital age is that innovation is inevitable. That progress has its own momentum, and our only real option is to adapt or be left behind. It's a comforting fiction for those in control, because if the future is already written, no one is accountable for the script.

But AI disrupts even that tidy narrative. It feels less like a march forward than a chaotic sprint: an open-ended race driven by hype, capital, and experimentation with few guardrails. If we’ve barely begun to understand the social experiment that is social media, AI arrives like an avalanche: faster, more pervasive, more consequential, and with little or no time to pause, reflect or question.

That’s why we must challenge this sense of inevitability. Because when we treat AI as destiny rather than design, we obscure the human decisions shaping it at every turn. From the data it’s trained on, to the objectives it’s built to achieve, to the interests it serves, or excludes, AI is not neutral. It is constructed.

And like all constructions, it bears the fingerprints of power, ideology, and social context.

This is precisely where the arts and humanities prove indispensable. They disrupt the dominant narrative that says we have no agency. They help us ask better questions, not just about what a system does, but who it serves, what it obscures, and what futures we might still choose to imagine.

I’m reminded here of Raymond Williams, one of the most insightful critics of technological determinism. In Television: Technology and Cultural Form, he dismantled the idea that technologies emerge from pure innovation, inevitably reshaping the societies they enter. As he put it:

“The basic assumption of technological determinism is that a new technology – a printing press or a communications satellite – ‘emerges’ from technical study and experiment. It then changes the society or sector into which it has ‘emerged’. ‘We’ adapt to it because it is the new modern way.”

Against this narrative, Williams proposed a theory of “cultural form”: the idea that technologies reflect and reinforce the social conditions of their creation. They do not arrive fully formed, altering society from without. They are always embedded, culturally, economically, ideologically, within the systems that produce them.

More recently, scholars like Ruha Benjamin have shown how technologies that appear neutral or progressive can actually reinforce and deepen existing social inequalities. In her concept of the “New Jim Code”, she explores how advanced systems often replicate and amplify racial bias, not through overt malice, but through design choices that reflect the values and priorities of their creators. What looks like innovation, she argues, frequently functions as a more subtle, and insidious, form of systemic power.

And yet, even within the tech industry itself, there are flickers of recognition. Back in October last year Dario Amodei, CEO of the AI company Anthropic, wrote a blog post titled Machines of Loving Grace (a knowingly poetic reference, lifted from Brautigan). It’s worth checking out, both as a manifesto of sorts and to get a sense not just of what these tech futurists think, but how they think.

Amid the usual futurist optimism, Amodei does something a little unusual for a tech leader: he leans into some of the uncomfortable truths:

“Unfortunately, I see no strong reason to believe AI will preferentially or structurally advance democracy and peace, in the same way that I think it will structurally advance human health and alleviate poverty. …If anything, some structural factors seem worrying: AI seems likely to enable much better propaganda and surveillance, both major tools in the autocrat’s toolkit. It’s therefore up to us as individual actors to tilt things in the right direction: if we want AI to favour democracy and individual rights, we are going to have to fight for that outcome”.

It’s a candid admission (maybe a rare one?) from someone in Amodei’s position. He acknowledges that AI won’t naturally bend toward democratic values. That it may, in fact, be structurally aligned with authoritarian tools. And that responsibility falls not on the system, but on us.

Elon Musk is attuned to the dangers of AI, but as this Time piece highlights, his wariness (and perhaps even genuine concern) underpins a sense of his own need to be at the forefront of narratives of technological innovation.

But one might cynically ask: Is this genuine moral reckoning, or just reputational hygiene? When tech leaders speak the language of values, are they engaging in ethical inquiry, or simply paying it lip service? Despite all the sober forecasting (and the more dystopian doomsaying that circulates), there isn’t really any serious thought of slowing down or redirecting the trajectory of AI development.

As AI becomes further woven into the fabric of daily life, the stakes are not abstract. There will be winners and losers. There already are.

The Shrinking of Ethics

Actually, it’s not enough to ask who’s steering AI. We also have to ask: by what compass?

In recent years, one of the most troubling trends has been the slow erosion of what “ethics” actually means in the tech world. A word that once carried philosophical gravity has been repackaged into something far more superficial. In many corporate and policy contexts, “ethics” now functions less as a framework for moral reasoning and more as a tool for reputational risk management. It becomes a branding exercise: anodyne principles, glossy white papers, and advisory boards that pay lip service to the notion of values.

It’s not that ethical questions aren’t being asked, it’s that they’re being neutered. Often, they’re subordinated to business priorities, reinterpreted through the lens of compliance. Even more worryingly, in The AI Mirror, Shannon Vallor warns that we may begin outsourcing moral judgment altogether, deferring decisions to AI systems in a process she calls “deskilling.”

This kind of moral outsourcing threatens to erode our very capacity for ethical reflection. You can already see its traces in the utopian rhetoric of tech evangelism.

Sometimes the pushback takes another form: a call to abandon the language of ethics entirely and focus instead on power. As Ruha Benjamin argues, this is a false choice. We cannot talk meaningfully about power without also invoking concepts like justice, harm, solidarity, and human dignity. Ethics and power are not opposites, they are entangled. The danger is when ethics becomes power’s handmaiden, rather than its check.

So how do we reclaim ethics from this kind of managerial flattening?

We start by returning to its roots, not as a corporate checklist, but as a mode of inquiry into how we live together.

This is where the humanities are not just useful, but indispensable. Philosophy, literature, and history don’t provide easy answers, but they cultivate the right kinds of questions. They remind us that ethics is not simply about what is permissible, but what is meaningful. Not just what’s efficient, but what’s right.

In other words, the humanities insist that ethics is not a luxury, it’s the ground on which our technological futures are built. They hold open the space to ask: What kind of life do we want to lead? And what do we owe one another, in the age of machines?


What Do the Humanities Actually Do?

Let’s be honest: the humanities won’t debug your model or streamline your data pipeline. They won’t boost engagement metrics or improve your quarterly KPIs. But what they do offer - what they have always offered - is something far more elusive and essential: a framework for interpreting what technology is doing to us, and what we are doing to ourselves through technology.

They don’t give us cleaner code. They give us cultural memory.

They bring history to bear on hype. They ask: where did this system come from? Whose values shaped it? What forms of power does it extend, or disguise?

They remind us that meaning isn’t the same as output. Just because something can be generated doesn’t mean it’s worth consuming. Not every prompt leads to insight. Not every answer is knowledge. The humanities help us dwell in the uncomfortable pause between question and understanding.

They nuance the cold logic of the machine. Not by softening its edges or making it “relatable,” but by insisting that the human is never reducible to data points. We are not just users or nodes or endpoints. We are embodied, emotional, ethical beings, unfinished, contradictory, plural.

And perhaps most importantly, they resist the tyranny of metrics. The logic of optimisation - faster, cheaper, better - is not a moral philosophy. The humanities restore value to things that can’t be benchmarked or A/B tested: ambiguity, care, memory, solidarity, slowness.

Not every problem is a puzzle to be solved. Some are conditions to be lived with. Some are questions that need to be held open.

Beyond Soft Skills: Pluralism, Process, Participation

This isn’t about adding a little humanism to an otherwise technical pipeline. It’s about reimagining the very terms on which we build and assess technological systems. The humanities don’t just expand our ethical vocabulary, they advocate for a conscious shaping of our sense of what matters, and why.

A humanities-informed approach to AI doesn’t just bring a different vibe. It brings a different vision, one rooted in complexity, contradiction, and care. Rather than collapsing ethics into efficiency, it offers a broader, more durable framework.

If I may be so bold as to offer three possible core principles:

1. Pluralism
There is no single “master value” that can govern AI. No universal metric - whether trust, fairness, or transparency - can capture the contradictions of human life. We value freedom and safety. Equality and uniqueness. Care and autonomy. The humanities help us hold space for these tensions without flattening them into dashboards.

2. Process Over Outcome
Ethical systems aren’t just about what they produce, but how they operate. A cancer diagnosis and a criminal sentence may both involve prediction, but we care deeply about the process behind the latter. The humanities teach us to attend to dignity, accountability, and the difference between procedure and mere calculation.

3. Participation
The future of AI cannot be designed behind closed doors. The humanities insist on democratic dialogue, cultural literacy, and shared meaning-making. Technology is not something people simply live with, it’s something they must be empowered to shape. Not as users, but as citizens.

The Kantian Reminder

If we want a different future for AI, one rooted in pluralism, care, and public accountability, then we need to reimagine the institutions that are shaping it.

For universities, this means rethinking the value of the humanities. Philosophy, history, literature, and the arts should be integrated across disciplines, and especially in those fields - engineering, design, computer science - where students are not just building systems, but building the very conditions of experience. The question is no longer whether the humanities are “useful,” but whether we can afford their absence in the spaces where our future is being prototyped.

For companies, it means drawing on this knowledge not as a late-stage PR gesture, but in the earliest phases of ideation and development. Yes, capitalism tends to privilege speed, scale, and bottom lines. But even on its own terms, there is value - economic, social, existential - in designing technologies that enhance human experience rather than diminish it, whether through instrumentalisation or addiction.

If the so-called “revolution” is coming, we should be clear-eyed about where it might lead. A future of techno-feudalism, where power pools into ever fewer hands, is not a glitch. It’s a trajectory. And we still have time to bend it.

Perhaps I’m being a Pollyanna. But there has to be at least a whisper of constructive critique, to counter both the utopian hype and what might be termed out-and-out “ostrichism”.

In the 18th century, Immanuel Kant wrote a short essay called The Conflict of the Faculties. In it, he argued that philosophy, and by extension the humanities, was the part of the university most responsible for truth. Not because it had superior knowledge, but because it was structurally positioned to critique the dominant powers of the time: law, theology, medicine.

Today, we might say the same of AI. It is no longer a neutral tool, it is increasingly aligned with the interests of corporate, military, and state power. And the humanities, far from being obsolete, remain the most capable domain for interrogating that alignment. Not to obstruct innovation, but to remind us that knowledge is never value-neutral, and that progress without reflection is just acceleration.

Perhaps we will reach a point where outright resistance is necessary. Some will say we are there now.

At the very least, I want to suggest that the fundamental principle of AI (and all technology, really) should be to expand and enhance, in intention and form, the human experience.

Not to say “no” to AI, but to ask, relentlessly: for whom, by whom, and to what end?

The humanities should play a key role in this, not as compliance consultants or ethical fig leaves, but as co-authors of the cultural narrative we are writing together. Their task is not to slow the future, but to ensure it remains recognisably human.

Because we’re not just building tools. We’re building futures.

And the question isn’t just: What can AI do?

It’s:
What should we ask of it?
What do we want it to mean?
And who gets to decide?


Thanks for reading this first volley of Machinations. If you’ve liked what you’ve read, I’d really appreciate it if you could restack/share it to your networks. This is a gesture of human curatorial practice that works better than any algorithmic recommendation.


If you’re not already a subscriber, please consider doing so by hitting the button below. Become part of the network of curious, fascinating people!!

A paid subscription is £3.50 per month and gives you access to the full articles and podcasts I produce. A lot of work goes into the writing and podcasting, so becoming a paying subscriber really helps support the continuation of the work.

I’ll also send you a physical postcard, wherever you may reside - how can one resist:

Become a paid Subscriber

If you don’t want to subscribe but could see your way to offering a small tip for the labour of producing the work, hit the button below.

Buy me a coffee

For paid subscribers below is a list of resources and recommendations of reading and listening related to AI and culture:
