ChatGPT is just a tool

A photograph of a hammer labeled “ChatGPT”

Sorry folks, but ChatGPT is just a tool, like a hammer, pen, spell checker, or Grammarly. Like any tool, ChatGPT can be useful—and it can also be dangerous when used incorrectly.

ChatGPT is a tool called a large language model (LLM). However, marketers love to call ChatGPT and other LLMs artificial intelligence (AI) because this framing ties into our culture’s fascination with understanding who we are, how we’re able to do what we do, and whether we are unique in our abilities or not. This fascination causes us to give human names to familiar objects, like cars, and to anthropomorphize our pets and tools. Harnessing fascination is a good way to sell us stuff.

Artificial intelligence?

I’ve programmed computers for over half a century and taught college computer science from 1983 to 1993. Computer researchers coined the term artificial intelligence in the mid-1950s.

In 1993, JET Thomas and I conducted a seminar at Marlboro College on AI. At that time, a second AI winter held sway. Rule- and frame-based systems predominated. Researchers had not yet implemented key components of effective neural networks, our current core AI technology. The best chess-playing programs could beat most players, but Grandmaster Garry Kasparov wouldn’t lose a chess match to a machine until 1997. During the seminar, teachers and students alike shared a fascination with the concept of AI. However, the technology of the time generated output that was completely determined by the rules built into its programs.

Unlike the software available during our 1993 seminar, however, humans can’t determine how today’s LLMs come up with their responses. And these sophisticated neural networks, often called deep learning, are everywhere. We use them for speech recognition, image recognition, and natural language processing. ChatGPT has been used to “write and debug computer programs, compose music, teleplays, fairy tales and student essays, answer test questions (sometimes, depending on the test, at a level above the average human test-taker), generate business ideas, write poetry and song lyrics, translate and summarize text, emulate a Linux system, simulate entire chat rooms, play games like tic-tac-toe, or simulate an ATM.”

Just because ChatGPT can be asked to do these things doesn’t guarantee that it creates reliable answers. The software’s creator, OpenAI, acknowledges that ChatGPT “sometimes writes plausible-sounding but incorrect or nonsensical answers”.

Here’s a simple example of the limitations and unreliability of ChatGPT.

The limitations and unreliability of ChatGPT

I asked ChatGPT to rewrite a passive voice sentence in an active voice.


Me: rewrite “Try not to get stuck.” as an active sentence.

ChatGPT: “Make an effort to avoid getting stuck.”

Me: That’s still a passive sentence!

ChatGPT: Apologies for that oversight. Here’s an active sentence version:

“Strive to avoid getting stuck.”

Me: Still passive!

ChatGPT: I apologize for the continued oversight. Here’s an active sentence version:

“Take proactive steps to avoid getting stuck.”

Me: “getting stuck” is passive, you’ve used it each time. rewrite it.

ChatGPT: Apologies once again for the oversight. Here’s an active sentence with a revised phrase:

“Make an effort to prevent becoming trapped.”

Me: “becoming trapped” is passive too. You’ve failed.

ChatGPT: Apologies for the repeated oversight. Here’s an active sentence with an active phrase:

“Exert effort to evade getting trapped.”

Me: Grammar is not your strong point.

ChatGPT: I apologize for the errors in my previous responses. I appreciate your patience, and I’ll make sure to provide a correct and active sentence for you. Here’s an active version of the sentence:

“Strive to avoid getting stuck.”


<Sigh>.
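As an aside, the get-passive construction that ChatGPT kept reproducing is easy to flag mechanically. The sketch below is a crude heuristic, not a real grammar checker: it looks for a be/get/become auxiliary followed by a past participle (regular -ed/-en forms plus a few irregulars such as “stuck”). The function name and word lists are illustrative assumptions, not part of any real tool.

```python
import re

# Crude passive-voice heuristic: a be/get/become auxiliary followed by a
# past participle. Regular participles end in -ed or -en; a short list
# covers a few irregulars such as "stuck". Illustrative only, not a parser.
AUX = r"(?:am|is|are|was|were|be|been|being|get|gets|got|gotten|getting|become|becomes|became|becoming)"
IRREGULAR = r"(?:stuck|caught|held|lost|made)"
PASSIVE = re.compile(rf"\b{AUX}\s+(?:\w+(?:ed|en)|{IRREGULAR})\b", re.IGNORECASE)

def looks_passive(sentence: str) -> bool:
    """Return True if the sentence appears to contain a passive construction."""
    return bool(PASSIVE.search(sentence))
```

Every one of ChatGPT’s five attempts above trips this simple check, while a genuinely active rewrite such as “Avoid traps.” does not.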

An automated mansplaining machine

This and numerous other examples of the confident fabrications of ChatGPT have led many to characterize it—fairly in my view—as “an automated mansplaining machine”, “autocomplete on steroids”, and a “bias optimizer”. As the third reference points out, “ChatGPT cannot recontextualize or independently seek out new information that contradicts [its] built-in assumptions.”

In the above example, it would have been refreshing if ChatGPT said at some point, “Sorry, I don’t know how to do that.” Instead, it repeatedly declared it had the answer. Hence the danger of using this tool. You have no guarantee anything ChatGPT avows is actually true! How can we use something we can’t completely trust?

The answer is: Carefully!

ChatGPT and the future of work

In 2019, I wrote about my concerns about the impact of machine learning on the future of work. Recently, people have written thousands of articles about how LLMs like ChatGPT jeopardize the future work prospects of numerous professions.

Some bosses will undoubtedly believe that they can replace employees with a smaller number of people editing well-prompted responses from ChatGPT. Such beliefs are dangerously naive. Here’s an example of the problems that can arise:

All new technology transforms job scope over time. But only some, like the automobile, eliminate industries. ChatGPT, by making some forms of work more efficient, may allow employees to be more productive, meaning a static industry may be able to employ fewer workers. But I don’t see the wholesale future elimination of lawyers, writers, software makers, and other professions that many predict.

ChatGPT is just a tool

My advice is to think of ChatGPT as just a tool. Knowing when and how to use a specific tool is important. For example, as I write this post I’m using Grammarly to check my spelling and grammar. It’s a useful tool that catches most of my errors on the fly and occasionally improves how I express myself. However, it often makes suggestions that are simply incorrect and need to be suppressed. If I relied on Grammarly 100% to proofread my work, errors would slip through. (Regardless, I’m not claiming I’m a perfect editor of my own work!)

Similarly, I find ChatGPT useful for suggesting possible blog post titles, summarizing articles, and brainstorming ways to express ideas in print. However, most of the LLM’s suggestions are not helpful to me; I always need to review them to determine if there’s anything worth using.

In conclusion, treat ChatGPT as a useful but fallible tool, rather than an oracle. And don’t worry too much about its impact on your future work prospects!

The job you’ll be doing in the future hasn’t been invented yet

A screenshot of some Google search results for "The job you'll be doing in x years hasn't been invented yet."

“The job you’ll be doing in the future hasn’t been invented yet.”

This is old news.

The story of my professional life

1977: I earn a Ph.D. in applied elementary high-energy particle physics. Get a post-doc position and move to the United States. Work at major U.S. particle accelerators for a year. Leave academic research forever.
Since 1978 — that’s 47 years! — none of the jobs I’ve held existed a few years earlier.

1978: I join the management of Solar Alternative, a solar energy manufacturing business founded the previous year. Five years earlier, there were no such businesses in the United States.

1983: I start teaching computer science using personal computers in the classroom. IBM introduced the PC in 1981.

1984: I begin IT consulting for clients using personal computers. Businesses didn’t start using personal computers until the early 80s.

1992: I organize a conference where there are no expert speakers available (it’s a new field, and there are no experts). Invent a way to make the conference successful based on the collective needs, wants, and experience of the attendees. (The conference has run annually for the last 33 years.) This is something new. Organizations hear about this and ask me to design and facilitate their conferences.

2005: I realize that the conference process I invented and have since improved is incredibly popular with participants. I decide to write a book about it, and in…

2009: I self-publish Conferences That Work: Creating Events That People Love. (Five years earlier, self-publishing was a minor industry for vanity projects. Now it’s the most common way authors publish.) I quickly discover the size and interest of the meetings industry. In demand, I become a meeting designer and facilitator of participant-driven, participation-rich meetings. Yet another career that had not existed before.

A conventional career

My parents once suggested I become an accountant. I politely declined and continued studying physics. I have nothing against conventional careers, but my life hasn’t turned out that way.

Yet.

If I had to guess, it probably won’t.

And it probably won’t for you either.

Has the job you’re doing now just been invented? Share your experience in the comments below!

The hive mind will see you now

Check out the above video.

In his post “The fourth cycle of the hive mind (and what to do about it)” Seth Godin explores how advances in computers are changing our world.

The first three of his revolutionary cycles are well established; the fourth is now arriving. Cycles one through three introduced calculation and data storage, connection, and shifting place and time.

Above all, Seth’s fourth cycle adds prediction.

“Call it AI if you want to, but to be specific, it’s a combination of analyzing information and then predicting what we would do if we knew what the computer knew.

…we’re giving those computers the ability to make predictions based on what thousands of people before us have done.

If you’re a mediocre lawyer or doctor, your job is now in serious jeopardy. The combination of all four of these cycles means that the hive computer is going to do your job better than you can, soon.

With each cycle, the old cycles continue to increase. Better databases, better arithmetic. Better connectivity, more people submitting more data, less emphasis on where you are and more on what you’re connected to and what you’re doing.

…just as we made a massive leap in just fifteen years, the next leap will take less than ten. Because each cycle supports the next one.”

In an earlier post, I wrote about how neural networks can now quickly learn to do certain tasks better than humans with no external examples, only the rules that govern the task environment.

Seth points out that when we supply computers with the huge, rapidly growing databases of human behavior, the fourth cycle becomes even more capable.

Seth ends with:

“Welcome to the fourth cycle. The hive will see you now.”

Attributions: Clip of AI HR interview and poster image

AlphaZero, machine learning, and the future of work

Not long ago I wrote about the end of decent-paid jobs and the need for basic income. A startling recent advance in machine learning has only heightened my concerns about the future of work. Last month, Google’s subsidiary, DeepMind, published a paper on AlphaZero, an artificial intelligence (AI) the company designed to play games. The AI started with only game rules. Here’s what happened next:

“At first it made random moves. Then it started learning through self-play. Over the course of nine hours, the chess version of the program played forty-four million games against itself on a massive cluster of specialized Google hardware. After two hours, it began performing better than human players; after four, it was beating the best chess engine in the world.”
—James Somers, New Yorker, How the Artificial-Intelligence Program AlphaZero Mastered Its Games

From “knowing” nothing about the game, in four hours the program became the strongest chess player in the world. AlphaZero also taught itself to become the world’s best Go and shogi player in a few hours.
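For intuition only, the self-play loop Somers describes can be sketched in miniature. The toy below is nothing like AlphaZero’s neural-network-plus-tree-search architecture; it uses simple tabular Q-learning to master single-pile Nim (take 1 to 3 sticks per turn; taking the last stick wins) purely by playing against itself, starting from the rules alone. All names and parameters are illustrative assumptions.

```python
import random

ACTIONS = (1, 2, 3)  # sticks a player may remove per turn
START = 10           # starting pile size; taking the last stick wins

def train(episodes=20000, alpha=0.5, eps=0.2, seed=0):
    """Learn Nim purely by self-play: both sides share one Q-table, and a
    win for the player about to move is a loss for the opponent."""
    rng = random.Random(seed)
    Q = {(n, a): 0.0 for n in range(1, START + 1) for a in ACTIONS if a <= n}
    for _ in range(episodes):
        n = START
        while n > 0:
            legal = [a for a in ACTIONS if a <= n]
            # epsilon-greedy: mostly exploit current values, sometimes explore
            if rng.random() < eps:
                a = rng.choice(legal)
            else:
                a = max(legal, key=lambda m: Q[(n, m)])
            rest = n - a
            if rest == 0:
                target = 1.0  # the mover took the last stick and wins
            else:
                # the opponent moves next; their best outcome is our worst
                target = -max(Q[(rest, m)] for m in ACTIONS if m <= rest)
            Q[(n, a)] += alpha * (target - Q[(n, a)])
            n = rest
    return Q

def best_move(Q, n):
    """Greedy move from a pile of n sticks under the learned values."""
    return max((a for a in ACTIONS if a <= n), key=lambda a: Q[(n, a)])
```

With no external examples, the agent rediscovers the classic strategy of always leaving its opponent a multiple of four sticks. AlphaZero’s achievement is this idea scaled up by many orders of magnitude, to games whose state spaces no table could ever hold.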

As a schoolboy, I played competitive chess for a few years. Although I haven’t played chess seriously since then, I still have a feeling for the game.

I was shocked watching AlphaZero’s tenth game with Stockfish, the strongest open-source chess engine in the world.

I’d describe AlphaZero’s play as completely solid, interspersed with incredible flashes of brilliance. Great human chess players have an uncanny ability to view a position and quickly select a few plausible moves for deeper study out of the many possible legal moves. The best grandmasters occasionally discover a brilliant and unexpected move in a game. AlphaZero found several during this game.

Having seen this game, I’d describe AlphaZero as the most creative, brilliant, and strongest chess player the world has ever seen.

Going from novice to best in the world in four hours is a level of performance that no human can match.

Now think about what would happen if this kind of performance could be achieved in human work environments such as:

  • medical scan diagnosis;
  • legal document creation;
  • engineering design; and
  • stock market trading.

These problems are harder than playing a game only because:

  • the problem space is larger; and
  • the data needed for learning can’t be self-generated and must be supplied by humans.

But these are not insuperable obstacles. If overcome, many high-paid jobs for medical practitioners, lawyers, accountants, and financial analysts would disappear.

So what is the future of work? Are we moving towards a world where the only available work is in low-paid “human service” areas where people are still cheaper than machines? Perhaps.

Until the arrival of robots capable of doing just about everything humans do. What work will remain for humans then?