In June 1958, science fiction writer Brian W. Aldiss published “But Who Can Replace a Man?” As a teenager, I discovered this thought-provoking short story while browsing the sci-fi shelves of my local library.
Like much science fiction, Aldiss’s tale explores humanity’s fraught relationship with technology in a dystopian future. The story depicts a world where humans are largely extinct, leaving machines with varying levels of intelligence to maintain society. When they discover humanity is gone, a group of increasingly dysfunctional machines tries to determine their purpose. You can read it here.
It’s no coincidence that this story has come to mind recently. Written over half a century ago, Aldiss’s satirical exploration of intelligence, hierarchy, and purpose eerily anticipates the rise of generative AI systems like ChatGPT.
The field-minder, seed distributor, radio operator, and other machines interact through rigid hierarchies and limited autonomy, leading to absurd conflicts, poor decisions, and an inability to work together effectively. Despite their artificial intelligence, their inability to adapt or cooperate effectively without humans underscores their limitations.
Large Language Models (LLMs) like ChatGPT demonstrate what looks like intelligence by generating human-like responses, yet lack comprehension, intentions, or ethical grounding. Like the machines in Aldiss’s story, such systems can often do well within certain boundaries. But ultimately they do not “understand” nuanced or value-driven concepts.
Aldiss critiques both the risks of delegating control to artificial systems and the hubris of assuming machines can entirely replace humans. His work remains a cautionary allegory, particularly relevant as we confront the implications of artificial general intelligence (AGI).
What can we learn from Aldiss’s story?
Over-Reliance Without Oversight: The machines’ dysfunction highlights how systems can falter without clear human guidance. Similarly, generative AI systems require careful oversight to align with human values and goals.
Hierarchical and Narrow Programming: Rigid hierarchies and predefined tasks limit the machines, much like how generative AI today struggles to adapt ethically or contextually outside its training.
Purpose and Alignment: Aldiss’s machines lack purpose without humans in the loop. Similarly, AGI systems need explicit alignment mechanisms to prevent unintended consequences.
Ethical and Social Implications: The story critiques the blind replacement of human labor and decision-making with machines, cautioning against losing sight of human agency and responsibility during technological advancement.
Balancing Innovation with Ethics
Today’s LLMs may not yet be autonomous, but they already strain the balance between augmenting human capabilities and replacing them outright. Aldiss’s story reminds us that technological advancement must go hand-in-hand with ethical safeguards and critical oversight. It’s a lesson we must heed as generative AI shapes the future.
Sorry folks, but ChatGPT is just a tool, like a hammer, pen, spell checker, or Grammarly. Like any tool, ChatGPT can be useful—and it can also be dangerous when used incorrectly.
ChatGPT is a tool called a large language model (LLM). However, marketers love to call ChatGPT and other LLMs artificial intelligence (AI) because this framing ties into our culture’s fascination with understanding who we are, how we’re able to do what we do, and whether we are unique in our abilities or not. This fascination causes us to give human names to familiar objects, like cars, and to anthropomorphize our pets and tools. Harnessing fascination is a good way to sell us stuff.
Artificial intelligence?
I’ve programmed computers for over half a century and taught college computer science from 1983 to 1993. Computer researchers coined the term artificial intelligence in the mid-1950s, building on work on machine intelligence that began in the 1940s.
In 1993, JET Thomas and I conducted a seminar at Marlboro College on AI. At that time, a second AI winter held sway. Rule-based and frame-based systems predominated. Researchers had not yet implemented key components of effective neural networks, our current core AI technology. The best chess-playing programs could beat most players, but Grandmaster Garry Kasparov wouldn’t lose a chess match to a machine until 1997. During the seminar, teachers and students alike shared a fascination with the concept of AI. However, the technology of the time generated output that was completely determined by the rules built into its programs.
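To make that concrete, here is a tiny Python sketch in the rule-based style of that era. The rules and facts are invented for illustration; the point is that the output is entirely determined by hand-written rules, with nothing learned from data.

```python
# A tiny illustration of the rule-based style of 1980s-era systems:
# the output is completely determined by hand-written if-then rules.
# The rules and facts below are invented for illustration.
def classify_animal(facts):
    """Apply fixed rules to a set of observed facts and return a category."""
    if "lays eggs" in facts and "flies" in facts:
        return "bird"
    if "lays eggs" in facts:
        return "reptile"
    if "gives milk" in facts:
        return "mammal"
    return "unknown"

print(classify_animal({"gives milk", "four legs"}))  # always prints "mammal"
```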
Here’s a simple example of the limitations and unreliability of ChatGPT.
The limitations and unreliability of ChatGPT
I asked ChatGPT to rewrite a passive voice sentence in an active voice.
Me: rewrite “Try not to get stuck.” as an active sentence.
ChatGPT: “Make an effort to avoid getting stuck.”
Me: That’s still a passive sentence!
ChatGPT: Apologies for that oversight. Here’s an active sentence version:
“Strive to avoid getting stuck.”
Me: Still passive!
ChatGPT: I apologize for the continued oversight. Here’s an active sentence version:
“Take proactive steps to avoid getting stuck.”
Me: “getting stuck” is passive, you’ve used it each time. rewrite it.
ChatGPT: Apologies once again for the oversight. Here’s an active sentence with a revised phrase:
“Make an effort to prevent becoming trapped.”
Me: “becoming trapped” is passive too. You’ve failed.
ChatGPT: Apologies for the repeated oversight. Here’s an active sentence with an active phrase:
“Exert effort to evade getting trapped.”
Me: Grammar is not your strong point.
ChatGPT: I apologize for the errors in my previous responses. I appreciate your patience, and I’ll make sure to provide a correct and active sentence for you. Here’s an active version of the sentence:
“Strive to avoid getting stuck.”
<Sigh>.
An automated mansplaining machine
This and numerous other examples of the confident fabrications of ChatGPT have led many to characterize it—fairly in my view—as “an automated mansplaining machine”, “autocomplete on steroids”, and a “bias optimizer”. As the third reference points out, “ChatGPT cannot recontextualize or independently seek out new information that contradicts [its] built-in assumptions.”
In the above example, it would have been refreshing if ChatGPT said at some point, “Sorry, I don’t know how to do that.” Instead, it repeatedly declared it had the answer. Hence the danger of using this tool. You have no guarantee anything ChatGPT avows is actually true! How can we use something we can’t completely trust?
The answer is: Carefully!
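To make “carefully” concrete, here is a minimal Python sketch of treating ChatGPT as an untrusted tool: request a rewrite through the API, then check the result yourself rather than taking the model’s word for it. It assumes the OpenAI Python client and an API key in the environment; the model name and the crude passive-voice heuristic are illustrative only, not a real grammar checker.

```python
# A minimal sketch: ask an LLM for a rewrite, then verify the result ourselves.
# Assumes the OpenAI Python client (pip install openai) with an API key in the
# environment. The model name and the passive-voice heuristic are illustrative.
import re
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# Crude heuristic: a form of be followed by a likely past participle, or a
# get/become construction such as "getting stuck".
PASSIVE_HINT = re.compile(
    r"\b(?:is|are|was|were|be|been|being)\s+\w+(?:ed|en)\b"
    r"|\b(?:get|gets|got|getting|become|becomes|becoming)\s+(?:stuck|trapped|caught)\b",
    re.IGNORECASE,
)

def looks_passive(sentence: str) -> bool:
    """Return True if the sentence matches the crude passive-voice pattern."""
    return bool(PASSIVE_HINT.search(sentence))

def rewrite_as_active(sentence: str) -> str:
    """Ask the model for an active-voice rewrite; the result still needs checking."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # illustrative model name
        messages=[{"role": "user",
                   "content": f'Rewrite "{sentence}" as an active sentence.'}],
    )
    return response.choices[0].message.content.strip()

original = "Try not to get stuck."
candidate = rewrite_as_active(original)
if looks_passive(candidate):
    print(f"Rejected (still looks passive): {candidate}")
else:
    print(f"Accepted: {candidate}")
```

The check is deliberately independent of the model: whatever tool you use to verify an LLM’s output, it should not be the LLM itself.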
ChatGPT and the future of work
In 2019, I wrote about my concerns regarding the impact of machine learning on the future of work. Recently, people have written thousands of articles about how LLMs like ChatGPT jeopardize the future work prospects of numerous professions.
Some bosses will undoubtedly believe that they can replace some employees with a smaller number of people eliciting well-prompted responses from ChatGPT. Such beliefs are dangerously naive. Here’s why:
All new technology transforms job scope over time, but only some, like the automobile, eliminate entire industries. By making some forms of work more efficient, ChatGPT may allow employees to be more productive, which means a static industry may be able to employ fewer workers. But I don’t see the wholesale future elimination of lawyers, writers, software makers, and other professions that many predict.
ChatGPT is just a tool
My advice is to think of ChatGPT as just a tool. Knowing when and how to use a specific tool is important. For example, as I write this post I’m using Grammarly to check my spelling and grammar. It’s a useful tool that catches most of my errors on the fly and occasionally improves how I express myself. However, it often makes suggestions that are simply incorrect and need to be suppressed. If I relied on Grammarly 100% to proofread my work, errors would slip through. (Regardless, I’m not claiming I’m a perfect editor of my own work!)
Similarly, I find ChatGPT useful for suggesting possible blog post titles, summarizing articles, and brainstorming ways to express ideas in print. However, most of the LLM’s suggestions are not helpful to me; I always need to review them to determine if there’s anything worth using.
In conclusion, treat ChatGPT as a useful but fallible tool, rather than an oracle. And don’t worry too much about its impact on your future work prospects!
The first three of Seth’s revolutionary cycles are well established; the fourth is now arriving. Cycles one through three introduced calculation and data storage, connection, and shifting place and time.
Above all, Seth’s fourth cycle adds prediction.
“Call it AI if you want to, but to be specific, it’s a combination of analyzing information and then predicting what we would do if we knew what the computer knew.
…we’re giving those computers the ability to make predictions based on what thousands of people before us have done.
…If you’re a mediocre lawyer or doctor, your job is now in serious jeopardy. The combination of all four of these cycles means that the hive computer is going to do your job better than you can, soon.
With each cycle, the old cycles continue to increase. Better databases, better arithmetic. Better connectivity, more people submitting more data, less emphasis on where you are and more on what you’re connected to and what you’re doing.
…just as we made a massive leap in just fifteen years, the next leap will take less than ten. Because each cycle supports the next one.”
In an earlier post, I wrote about how neural networks can now quickly learn to do certain tasks better than humans with no external examples, only the rules that govern the task environment.
Seth points out that when we supply computers with the huge, rapidly growing databases of human behavior, the fourth cycle becomes even more capable.
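As a rough illustration of what this kind of prediction amounts to, here is a minimal Python sketch using invented behavior data: record what many people did in a given context, then predict that the next person will do whatever was most common. Real systems use far richer models, but the underlying idea is the same.

```python
# A minimal sketch of "prediction from what thousands of people before us have
# done": count what previous people did in each context and predict the most
# common choice. The contexts and actions are invented for illustration.
from collections import Counter, defaultdict

# (context, what the person actually did) -- e.g., harvested behavior logs
history = [
    ("rainy evening", "ordered delivery"),
    ("rainy evening", "ordered delivery"),
    ("rainy evening", "cooked at home"),
    ("sunny weekend", "went hiking"),
    ("sunny weekend", "went hiking"),
]

counts = defaultdict(Counter)
for context, action in history:
    counts[context][action] += 1

def predict(context):
    """Predict what a new person will do: the most common past action."""
    if context not in counts:
        return None
    return counts[context].most_common(1)[0][0]

print(predict("rainy evening"))  # -> "ordered delivery"
```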
Seth ends with:
“Welcome to the fourth cycle. The hive will see you now.”
Not long ago I wrote about the end of decent-paid jobs and the need for basic income. A startling recent advance in machine learning has only heightened my concerns about the future of work. Last month, Google’s subsidiary, DeepMind, published a paper on AlphaZero, an artificial intelligence (AI) the company designed to play games. The AI started with only game rules. Here’s what happened next:
“At first it made random moves. Then it started learning through self-play. Over the course of nine hours, the chess version of the program played forty-four million games against itself on a massive cluster of specialized Google hardware. After two hours, it began performing better than human players; after four, it was beating the best chess engine in the world.” —James Somers, New Yorker, How the Artificial-Intelligence Program AlphaZero Mastered Its Games
From “knowing” nothing about the game, in four hours the program became the strongest chess player in the world. AlphaZero also taught itself to become the world’s best Go and shogi player in a few hours.
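AlphaZero’s actual method pairs deep neural networks with Monte Carlo tree search on specialized hardware, and nothing that compact fits here. The toy Python sketch below (tic-tac-toe with a simple value table) only illustrates the self-play idea in the quote: start with nothing but the rules, play against yourself, and learn from the outcomes.

```python
# A toy illustration of self-play learning. This is NOT AlphaZero; it only
# shows the core idea: start with the rules, play yourself, learn from results.
import random

LINES = [(0, 1, 2), (3, 4, 5), (6, 7, 8),
         (0, 3, 6), (1, 4, 7), (2, 5, 8),
         (0, 4, 8), (2, 4, 6)]

def winner(board):
    for a, b, c in LINES:
        if board[a] != " " and board[a] == board[b] == board[c]:
            return board[a]
    return None

def moves(board):
    return [i for i, cell in enumerate(board) if cell == " "]

values = {}  # board (as a string) -> estimated value from X's point of view

def value(board):
    return values.get("".join(board), 0.5)

def choose(board, player, epsilon=0.1):
    """Pick a move epsilon-greedily; X maximizes the value, O minimizes it."""
    if random.random() < epsilon:
        return random.choice(moves(board))
    best = None
    for m in moves(board):
        board[m] = player
        v = value(board)
        board[m] = " "
        if best is None or (v > best[1] if player == "X" else v < best[1]):
            best = (m, v)
    return best[0]

def self_play_game(alpha=0.2):
    board, player, visited = [" "] * 9, "X", []
    while winner(board) is None and moves(board):
        board[choose(board, player)] = player
        visited.append("".join(board))
        player = "O" if player == "X" else "X"
    outcome = {"X": 1.0, "O": 0.0, None: 0.5}[winner(board)]
    for state in visited:  # pull each visited position toward the final result
        old = values.get(state, 0.5)
        values[state] = old + alpha * (outcome - old)

# AlphaZero played forty-four million games in nine hours; this toy plays
# twenty thousand in a few seconds.
for _ in range(20000):
    self_play_game()
print(f"Learned value estimates for {len(values)} positions")
```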
As a schoolboy, I played competitive chess for a few years. Although I haven’t played chess seriously since then, I still have a feeling for the game.
I was shocked watching AlphaZero’s tenth game with Stockfish, the strongest open-source chess engine in the world.
I’d describe AlphaZero’s play as completely solid, interspersed with incredible flashes of brilliance. Great human chess players have an uncanny ability to view a position and quickly select a few plausible moves for deeper study out of the many possible legal moves. The best grandmasters occasionally discover a brilliant and unexpected move in a game. AlphaZero found several during this game.
Having seen this game, I’d describe AlphaZero as the most creative, brilliant, and strongest chess player the world has ever seen.
Going from novice to best in the world in four hours is a level of performance that no human can match.
Now think about what would happen if this kind of performance could be achieved in human work environments such as:
medical scan diagnosis;
legal document creation;
engineering design; and
stock market trading.
These problems are harder than playing a game only because:
the problem space is larger; and
the data needed for learning can’t be generated by the AI itself and must be supplied by humans.
But these are not insuperable obstacles. If overcome, many high-paid jobs for medical practitioners, lawyers, accountants, and financial analysts would disappear.
So what is the future of work? Are we moving towards a world where the only available work is in low-paid “human service” areas where people are still cheaper than machines? Perhaps.
Until the arrival of robots capable of doing just about everything humans do. What work will remain for humans then?
Companies are now marketing services for artificial intelligence matchmaking at events. However, unresolved issues could impede the adoption of this technology, especially by attendees.
Consider this marketing pitch for an artificial intelligence event matchmaking service:
“Using the [AI] platform…it’s easier for attendees to make sure they have the right meetings set up, and for exhibitors to have a higher return on investment in terms of connections with high-quality buyers.” —Tim Groot, CEO Grip, as quoted in What AI Means To Meetings: How Artificial Intelligence will boost ROI, Michael Shapiro, July 2017 Meetings & Conventions Magazine
A win-win for exhibitors and attendees?
Tim describes using artificial intelligence matchmaking at events as a win for both exhibitors and attendees.
I’m skeptical.
Let’s assume, for the moment, that the technology actually works. If so, I think suppliers will reap most of the touted benefits, quite possibly at the expense of attendees. Here’s why.
Successful matchmaking needs digital data about attendees. An AI platform cannot work without this information. Where will the data come from? Tim explains that his service builds a profile for each attendee. Sources include “LinkedIn, Google, and Facebook”, supplemented by “scouring the web for additional information”.
Using social media platform information, even if attendee approval is requested first, is a slippery slope: privacy issues in meeting apps remain largely undiscussed, and attendees give them little thought during the rush of registration. The result is that the AI matchmaking platform gains a rich reservoir of attendee data that, without strong, verifiable safeguards, may be sold to third parties or even handed to suppliers.
Now suppose exhibitors do get great information about whom to target. The result: “high-value” attendees will be bombarded with even more meeting requests, while attendees who don’t fit the platform’s predictions will be neglected.
In my opinion, the third-party services for meetings that are best, and most likely to succeed, are those that provide win-win outcomes for everyone concerned. Unfortunately, it’s common (and often self-serving) to overlook a core question about meeting objectives — whom is your event for? — and end up with a “solution” that benefits one set of stakeholders over another.
How well will artificial intelligence matchmaking at events work for attendees?
Artificial intelligence is hot these days, so it’s inevitable that event companies talk about incorporating it into their products, if only because it’s a surefire way to get attention from the meetings industry.
I know something about AI. In the ’80s I was a professor of computer science, and the theory of artificial neural networks — the heart of modern machine learning — was already thirty years old. Practical implementation, however, had to wait for the vastly more powerful computers we have today.
The combination of powerful computing and well-established AI research is producing incredible progress in areas such as real-time natural language processing and translation. Even so, I don’t see why sucking social media and registration data into a database and using AI to look for correlations will provide attendee matchmaking superior to what a participant-driven and participation-rich meeting process, combined with attendees’ real-time event experience, can achieve. (Once again, exhibitors may see a benefit from customized target attendee lists, but I’m looking for a win-win here.)
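To make the comparison concrete, here is a minimal Python sketch of what correlation-style matchmaking over harvested profile data typically boils down to: represent each attendee as a set of interest tags and rank pairs by overlap. The names and tags are invented; real platforms are more elaborate, but they still see only the static profile, not anything that happens in the room.

```python
# A minimal sketch of profile-based matchmaking: rank attendee pairs by the
# overlap of their interest tags. The names and tags are invented; real
# platforms use richer data and models, but they still only see the profile.
from itertools import combinations

profiles = {
    "Alice": {"event tech", "av production", "hybrid events"},
    "Bob":   {"sponsorship", "event tech", "trade shows"},
    "Carol": {"facilitation", "participant-driven events", "hybrid events"},
    "Dave":  {"trade shows", "sponsorship", "lead retrieval"},
}

def jaccard(a, b):
    """Similarity of two tag sets: size of intersection over size of union."""
    return len(a & b) / len(a | b)

matches = sorted(
    ((jaccard(profiles[p], profiles[q]), p, q)
     for p, q in combinations(profiles, 2)),
    reverse=True,
)
for score, p, q in matches:
    print(f"{p} <-> {q}: {score:.2f}")
```

Jaccard overlap stands in here for whatever similarity measure a real platform might use; the structural limitation is the same regardless of the measure, because the match can only be as good as the static profile behind it.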
From the attendee’s point of view
When attendees enter a meeting room there’s a wealth of information available to help make relevant connections. Friends introduce me to people I haven’t yet met. Eavesdropping on conversations opens up more possibilities. Body language and social groupings also provide important potential matchmaking information. An AI matchmaking database includes none of these resources. All of them have led me (and just about everyone who’s ever attended meetings) to professional connections that matter.
Coda
I’ll conclude with a story. The June 2017 PCMA Convene article Can Artificial Intelligence Make You a Better Networker? describes a techsytalk session by Howard Givner where he “gave particular emphasis to the importance of facilitated matchmaking at events.” I like to think that Howard discovered this when he attended the participant-driven and participation-rich EventCamp East Coast I designed and facilitated in 2010, about which he wrote:
“…it was one of the most innovative and eye-opening professional experiences I’ve had. Aside from coming back with lots of new tips and ideas, I easily established triple the number of new contacts, and formed stronger relationships with them, than at any other conference I’ve been to.”