But Who Can Replace a Man?

In June 1958, science fiction writer Brian W. Aldiss published “But Who Can Replace a Man?” As a teenager, I discovered this thought-provoking short story while browsing the sci-fi shelves of my local library.

Cover of “Who Can Replace A Man?” by Brian Aldiss, originally published in Infinity Science Fiction in 1958.

Like much science fiction, Aldiss’s tale explores humanity’s fraught relationship with technology in a dystopian future. The story depicts a world where humans are largely extinct, leaving machines with varying levels of intelligence to maintain society. When they discover humanity is gone, a group of increasingly dysfunctional machines tries to determine its purpose. You can read it here.

(Thank you, Wayback Machine!)

Can Generative AI Replace a Man?

It’s no coincidence that this story has come to mind recently. Written more than six decades ago, Aldiss’s satirical exploration of intelligence, hierarchy, and purpose eerily anticipates the rise of generative AI systems like ChatGPT.

The field-minder, seed distributor, radio operator, and other machines interact through rigid hierarchies with limited autonomy, leading to absurd conflicts and poor decisions. Despite their artificial intelligence, their inability to adapt or cooperate without human guidance underscores their limitations.

Large Language Models (LLMs) like ChatGPT demonstrate what looks like intelligence by generating human-like responses, yet they lack comprehension, intention, and ethical grounding. Like the machines in Aldiss’s story, such systems often perform well within narrow boundaries, but ultimately they do not “understand” nuanced or value-driven concepts.

Aldiss critiques both the risks of delegating control to artificial systems and the hubris of assuming machines can entirely replace humans. His work remains a cautionary allegory, particularly relevant as we confront the implications of artificial general intelligence (AGI).

What can we learn from Aldiss’s story?

Over-Reliance Without Oversight: The machines’ dysfunction highlights how systems can falter without clear human guidance. Similarly, generative AI systems require careful oversight to align with human values and goals.

Hierarchical and Narrow Programming: Rigid hierarchies and predefined tasks limit the machines, much like how generative AI today struggles to adapt ethically or contextually outside its training.

Purpose and Alignment: Aldiss’s machines lack purpose without humans in the loop. Similarly, AGI systems need explicit alignment mechanisms to prevent unintended consequences.

Ethical and Social Implications: The story critiques the blind replacement of human labor and decision-making with machines, cautioning against losing sight of human agency and responsibility during technological advancement.

Balancing Innovation with Ethics

Today’s LLMs may not yet be autonomous, but they already test the balance between augmenting human capabilities and replacing them outright. Aldiss’s story reminds us that technological advancement must go hand-in-hand with ethical safeguards and critical oversight. It’s a lesson we must heed as generative AI shapes the future.

Events operate by stories

Cover of “Record of a Spaceborn Few” by Becky Chambers.

“Our species doesn’t operate by reality. It operates by stories. Cities are a story. Money is a story. Space was a story, once. A king tells us a story about who we are and why we’re great, and that story is enough to make us go kill people who tell a different story. Or maybe the people kill the king because they don’t like his story and have begun to tell themselves a different one.”
—Isabel, in Record of a Spaceborn Few by Becky Chambers

I love science fiction, which Pamela Sargent calls “the literature of ideas”. In a world where it sometimes seems change is impossible, science fiction explores how our future might be different. Science fiction is also especially rich in possibilities for introducing cognitive dissonance: the mental discomfort we feel when we’re aware of two contradictory ideas at the same time.

Above all, good science fiction excels at telling stories. Powerful stories. Stories that routinely predict the future: Earth-orbiting satellites, the surveillance state, cell phones, electric submarines, climate change, electronic media, and the Cold War were all foreshadowed by science fiction long before they came to pass. Through the power of stories, science fiction introduces possible futures, some of which become real.

Events operate by stories

Like science fiction, events create futures, and events operate by stories. Just as good stories have a story arc, coherent events have a conference arc. In addition, every participant creates their own story at an event, just as each reader or viewer individually absorbs and experiences the story of a book or movie.

The promise of events springs from the reality that we are the stories we tell about ourselves. The stories that events tell, and that we internalize, change us.

It’s incumbent on all of us who create and design events to think carefully and creatively about the stories our events tell. When we do so successfully, the power of stories shapes and maximizes participants’ individual and collective outcomes — and changes lives.

One reason I like science fiction

Cover of “Idoru” by William Gibson.

Here’s one reason I like science fiction. The other day I was rereading Idoru by William Gibson, and the following passage spoke to me. It’s a conversation between two teenagers: Chia McKenzie from Seattle, who is visiting Mitsuko Mimura in near-future Japan…

Mitsuko was getting her computer out. It was one of those soft, transparent Korean units, the kind that looked like a flat bag of clear white jelly with a bunch of colored jujubes inside. Chia unzipped her bag and pulled her Sandbenders out.

“What is that?” Mitsuko asked.

“My computer.”

Mitsuko was clearly impressed. “It is by Harley-Davidson?”

“It was made by the Sandbenders,” Chia said, finding her goggles and gloves. “They’re a commune, down on the Oregon coast. They do these and they do software.”

“It is American?”

“Sure.”

“I had not known Americans made computers,” Mitsuko said.

Why did this passage strike a chord? I’ve found that noticing such responses can be useful, or at least interesting.

I realized that science fiction is especially rich in possibilities for introducing cognitive dissonance: the mental discomfort we feel when we’re aware of two contradictory ideas at the same time. When we notice this discomfort, we tend to reduce it unconsciously in various ways.

In the above passage, the reader casually discovers that a near-future Japanese teenager has no idea that America still makes computers [“It is by Harley-Davidson?”, “I had not known Americans made computers.”]. When reading fiction, it’s easier to notice and put up with cognitive dissonance because, well, we know it’s just fiction.

One reason I like science fiction is that it’s easier to notice sneakily introduced, provocative ideas about how the world could be different — and then, perhaps, start to wonder what it would be like if the world really was that way.