đź“– Race After Technology

By Ruha Benjamin

Review by Alexandra Carvalho

What if the problem isn’t just that technology is biased, but that it’s working exactly as designed?

In Race After Technology, Ruha Benjamin confronts the seductive illusion of neutrality in tech. She exposes how racism, far from being an outdated glitch, is coded directly into our systems, disguised as optimisation, efficiency, and innovation. Her framework of the “New Jim Code” brilliantly names the fusion of coded bias and digital denial: technologies that deepen racial inequities while pretending to rise above them.

This is not a book about rogue bad actors or malfunctioning models. It’s about power. Benjamin argues that modern AI and data systems are not failing to be fair; they are succeeding at reproducing a social order that’s already unjust. Her examples span predictive policing, facial recognition, hiring algorithms, and data fusion centres, showing how digital tools often accelerate rather than mitigate harm, especially toward Black communities.

But this is no moral panic. Benjamin writes with precision, academic rigour, and lived understanding. She dismantles techno-solutionism and the fetishisation of objectivity in tech, replacing them with a call for abolitionist tools: systems designed to undo, not refine, inequality.

Key Insight

“The road to inequity is paved with technical fixes.”
The New Jim Code isn’t overt; it thrives on invisibility, claiming progress while perpetuating disposability. When bias is embedded at the design stage, scaled at speed, and shielded by opacity, the harm becomes systemic.

Why It Matters

For those of us working in AI governance and Responsible AI, Race After Technology is essential. It warns us that “bias audits” and “diversity dashboards” are not enough. Cosmetic fixes can conceal deeper structural issues. We need design justice, not performative compliance.
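To make that concrete, here is a minimal sketch (my own illustration, not Benjamin’s, with invented names and numbers) of how a “race-blind” model can pass a naive bias audit while a correlated proxy, such as a postcode shaped by historical segregation, reproduces the same disparity:

```python
# Hypothetical sketch: a scoring model that never sees the protected
# attribute can still produce disparate outcomes through a proxy.
# All data here is synthetic and illustrative.
import random

random.seed(42)

def make_applicant():
    """Return a (group, postcode) pair for one synthetic applicant."""
    group = random.choice(["A", "B"])  # protected attribute
    # Historical segregation: postcode correlates strongly with group.
    if group == "A":
        postcode = "1xx" if random.random() < 0.9 else "2xx"
    else:
        postcode = "2xx" if random.random() < 0.9 else "1xx"
    return group, postcode

def race_blind_score(postcode):
    # The model never touches `group`, so it passes a naive audit that
    # only asks "does the model use race as an input?"
    return 0.8 if postcode == "1xx" else 0.3

applicants = [make_applicant() for _ in range(10_000)]
for g in ("A", "B"):
    scores = [race_blind_score(pc) for grp, pc in applicants if grp == g]
    print(f"group {g}: mean score = {sum(scores) / len(scores):.2f}")
# Prints roughly 0.75 for group A and 0.35 for group B: the disparity
# survives even though the explicit attribute was removed.
```

Auditing the feature list tells you nothing here; only asking where the proxy came from, and whose history it encodes, surfaces the harm.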

Benjamin’s lens forces a reckoning: If your AI governance framework doesn’t account for how race, power, and profit intersect, it’s not Responsible AI. It’s reputational damage control.

She challenges us to go beyond bias detection and into the architecture of exclusion: who defines what counts as risk, who gets coded as criminal, who is visible, and who is filtered out entirely.

Read This If You Are:

  • An AI builder tasked with “removing bias” who wants to know why it keeps coming back.
  • A policymaker writing algorithmic-accountability regulation who is ready to go beyond checklists.
  • A systems architect or ethicist searching for design principles that serve justice, not just compliance.

The book also makes the case that AI governance must be intersectional, and that race-blind policies fail. Benjamin’s idea that race itself is a kind of technology reshapes how we think about data inputs, model outputs, and decision-making pipelines.

What You’ll Walk Away With

  • A language to describe what you may have felt but couldn’t name: that many AI systems don’t just reflect injustice; they institutionalise it.
  • A mandate to interrogate your models, datasets, and goals with a sharper, historically aware lens.
  • The conviction that equity isn’t a UX feature; it’s a foundational requirement.
