Ethics Can’t Be Hardwired

Why AI ethics fails when responsibility is treated as a technical problem

I have not written here since September. That pause was intentional. I stepped back from writing to stay closer to the practical work. The AI conversation did not slow down in that time. If anything, it became louder, faster, and more confident. What felt missing was not commentary, but clarity.

This edition is about getting clear on what matters when organisations treat ethics as a technical problem.

The Question Organisations Keep Asking

As AI systems grow more capable, organisations keep returning to a familiar question.

“How do we build ethical AI?”

It sounds responsible. It is also increasingly unhelpful.

The assumption underneath it is that ethics behaves like software, something that can be specified, encoded, and enforced once. That assumption does not hold in systems that learn, adapt, and scale human behaviour.

Why Ethics Resists Being Encoded

AI ethicist De Kai has described the idea of hardwiring ethics into AI systems as a pipe dream. Not because ethics does not matter, but because ethics does not behave predictably.

Ethical principles collide. Cultures diverge. Context shifts. Even humans cannot agree on what the “right” action is in many real situations, which is why moral dilemmas refuse to stay theoretical.

Modern AI systems do not follow static logic. They learn from us, including our inconsistencies.
