November 23, 2024

Monday Morning Moan – AI: time to wear a black tie for the death of white-collar jobs?


(Pixabay)

On 27 January, the SONO Music Group, a record label that talks about nurturing acts so they can flourish as a “collaborative family”, tweeted that it was using ChatGPT to help its artists write, pitch, present themselves, and come up with ideas. This raised the question: if it cared so much about creative people, why not employ some to do those jobs, rather than an AI?

“That’s the first option over all”, SONO replied when I asked them. Fair enough. (But please note, creatives are now optional.) However, for many companies the answer to that question is simple: talented humans cost money. Sometimes big money, despite the network effect’s slow march towards paying them in cents or ‘exposure’.

When that message is filtered through the finance and investment media, it becomes “Company X’s backers are excited by the use of cutting-edge technology!” Oh, be honest: they’re turned on by slashing costs (nobody watches The Texas Chainsaw Massacre for character nuance).

Sadly, announcements about the adoption of ChatGPT et al are increasingly commonplace from publishers, marketers, advertisers, and others in 2023. They’re almost like corporate gender reveal parties. I say “sadly” not because I have a problem with AI – I don’t – but because this has happened almost overnight, in the wake of soaring inflation and economic woes. 

In other words, it seems tactical, desperate, a knee-jerk reaction. A rush to associate with buzz, rather than something strategic or on brand. Proudly announcing that you’re not using people for creative work is bold, but unwise if your history is one of investing in human expression. 

It seems zeitgeisty and modern, but only if you consider OpenAI’s system to be an artificial intelligence or an ‘innovative learning engine’. But if you call it what it actually is, a derivative work generator, then raving about it seems absurd. Self-defeating, in fact – apart from those cost savings, of course.

Imagine if that record company had tweeted:

Good news everyone! We’re using a free tool that rewords historic, human-made content – sometimes sourced by OpenAI without permission from its creators – to help our artists be original!

Yet here we are.

King CNET versus the sea

These issues have come to haunt CNET in 2023. The tech news site – one of several publishers experimenting with ChatGPT – was found to have been covertly punting AI-generated articles “for months”, according to some reports. Holy brand destruction!

But there’s a bigger problem, say gleeful rivals. Some of the pieces written by ‘CNET Money Staff’ (ChatGPT on those occasions) contained obvious factual errors. No doubt similar errors are now snowballing through other sites too. (The word ‘staff’ in a byline used to be code for ‘press release’, rather than ‘generated by a robot’.)

Other AI-assisted CNET articles reportedly featured plagiarized content, as though this were somehow surprising from a tool that generates derivative work.

Alas, CNET is far from alone. For example, Forbes reported on 26 January that BuzzFeed was also turning to ChatGPT, in what was described as “the first step in greater content creation” using the GPT engine.

Meanwhile, BuzzFeed trumpeted a deal with Meta to bring content creators to Facebook and Instagram. Between them, these moves reversed the stock decline that occurred in December after the site had announced a… 12% workforce cut. Spooky.

Reuters quoted BuzzFeed CEO Jonah Peretti saying that the next 15 years will be “defined by AI and data helping create” its content – rather than, say, by original journalism. Human oversight will “still matter”, he added, to provide ideas and “inspired prompts” to ChatGPT. And this from a publisher! 

So, it seems that creative people’s main job is now to poke and feed the machine – one that has no concept of what a human even is, because it isn’t intelligent, and it isn’t sentient; it’s just a tool that upscales historic data. Charlie Brooker’s Black Mirror could hardly have invented a sadder and more ridiculous state of affairs. 

Perhaps one day this era will be called The Great Stupid: the time when once-respected publishers rushed to declare themselves to be in the bot-curation business. After all, why would readers go to any site for AI-generated content if they could simply use ChatGPT themselves? Doesn’t it make that publisher irrelevant? Aren’t CNET, BuzzFeed, et al, just shooting themselves in the face for clicks?

Don’t worry, I’m fine

Now, in case you think all this is sour grapes from a flesh-and-blood content provider: hardly. I’ve been warning about this type of thing for years, and have been paid for saying it (ker-ching!).

In the Before Times, pre-COVID, I once ranted about search engine optimization (SEO) tools pushing writers towards using repetitive, primary-school language to create text that was more visible and attractive to Google. (“Robots are great! Big robots and small robots! ‘Robots, robots, robots’ said A Robot! Humans love robots!” etc.) A client once sent me an A4 list of words he wanted a short article to include – “For SEO”, he explained. That left me with 10 of my own.

The logical endpoint of those trends, I wrote in my (then) blog, is an industry of machine-generated, machine-readable content. One in which humans have become uninterested bystanders – barely even a consideration in the relentless drive towards machine visibility and clicks. A future in which humans’ job is hitting ‘Like’ when a robot publishes something designed for another machine.

And now look where we are. No wonder more and more of us wander off to Etsy to buy something handmade while we still have an income.

Whiff-WEF

Needless to say, these issues were a hot topic for the World Economic Forum this month. Will AI – gulp! – sweep aside the kind of white-collar professionals who pay for a seat at Davos? The kind of panellists who say, as one did, “We drink our own champagne” rather than “We eat our own dogfood”. (I would say “read the room”, but he clearly had.)

For years, the claim that AI, machine learning, robotics, and automation would come for blue-collar jobs first has seemed alarmist and overstated. In general, blue-collar employment remains high in highly automated countries. That’s partly because new products and services arise – jobs working with the machines. Plus, in a global economy, unethical megacorps can always find low-wage humans to employ.

But an AI-triggered bloodbath of lawyers, bankers, stockbrokers, financial advisors, investment analysts, consultants, accountants, marketers, copywriters, journalists, artists, designers, and PR executives – a scenario that lurks in the subtext of films like The Menu and Glass Onion? Well, that would never do!

Among the panellists were Lauren Woodman, CEO of socially focused non-profit DataKind; Mihir Shukla, co-founder and CEO of softbot provider Automation Anywhere; and Professor Erik Brynjolfsson, Director of the Digital Economy Lab at Stanford University.

Woodman’s opening salvo was a good one. She said: 

I worry about the disruption that is potentially coming and whether or not we are training folks to be successful in the next generation. I have no fears of what’s going to happen: technology progresses, it can be used for good. But it’s the transition periods I worry about. Are we preparing governments, society, communities? And are we prepared to support people through that transition? 

The other thing I think about is, in the sector in which I work, are we actually helping the organizations that make communities thrive? Are we helping them think about how their work is going to be disrupted? And how we deploy these tools against those problems, because they could be incredibly powerful and impactful.

Of course, every professional thinks that all the other jobs will be automated, but not theirs. Yet AI is quickly revealing that this isn’t true. So, transferable skills, sector expertise, and an understanding of how to work alongside Industry 4.0 tools will be essential. But survey after survey has found that, while companies are rushing to adopt AI, the essential business, IT, and data skills needed to make it work are lacking.

Shukla offered an automation vendor’s oft-repeated perspective and said:

There are about [a] billion knowledge workers. These are the people who are sitting in front of computers […]. What has changed in the last many years is, with the help of AI and robotic process automation, now software bots are able to operate all these applications.

Anywhere from 15 to 70% of all the work that we do in front of a computer could now be automated, which is truly a watershed moment. If you can process a mortgage application in four minutes instead of 30 days, then you end up processing a lot more and you gain market share. So, this is about doing more [not firing people].

Every RPA CEO is a salesman and will tell you the same thing: intelligent automation means you can do more with less, win new business, process more things, and – drum roll – free up your employees to use their creativity and add value. But there’s a problem: ChatGPT et al are now taking the creative tasks away. So, your new job is typing prompts into a machine. A different kind of drudgery: one that tells you your own skills are valueless.

So, what does an academic think? After sharing an anecdote about partying with a software magnate – if nothing else, the WEF triggers talk of hedonistic abandon whenever job losses are discussed – Stanford’s Professor Brynjolfsson said:

We’ve been talking about exponential improvement in the technology, and how labour institutions and skills organizations aren’t keeping up in that growing gap. But I’m not sure ‘exponential’ is the right metaphor anymore [it’s an adjective, Erik!]. It’s more like ‘punctuated equilibrium’, because this is a burst forward of capabilities and these build on top of the other ones.

There’s always a wave of concern and fear about job losses, and whether or not there’s going to be mass unemployment. In fact, unemployment is at a record low. And so, it’s not really eliminating tens of millions of jobs or anything. What it’s doing is affecting job quality and changing the way that we do the work. 

One of the things that we’re looking at in Stanford is keeping humans in the loop. And the way that can be done is to make the work more fulfilling. To get rid of some of the boring, routine work of filling out invoices, or whatever. And people can focus on some of the more interesting, human-centred parts and connecting.

Spoken like an RPA vendor. But what does that actually mean? Are employees going to be lounging on a virtual hillside somewhere, eating low-res grapes among the Grecian columns of some postmodern Meta box, having Platonic thoughts about the universe? Isn’t their boss just going to tap his/her/their watch and order the company robot to kill them? 

He continued:

I think there’s an opportunity to use the technology in lots of different ways. One of the things it’ll be interesting to see over the next decade is to what extent we do keep humans in the loop and work on creating higher job quality. And not simply looking at doing more of the same more cheaply, or driving down wages.

He added:

Either path is possible.

My take

Decision makers, now is the time to stand for something. To think strategically, not tactically. To actually consider the future, for young and old.

Don’t just grab the short-term advantages of attaching yourself to the latest Twitter or TikTok trend. Don’t rush to say, “We’re using AI!” just because your competitor is. (Perhaps your rival has declared itself obsolete. Why do the same?)

Consider how your business will look in the future as people read what you said today. And ask yourself: who are we working for, and why? And what do they actually need from us?

And good luck.
